00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3660 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3262 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.018 The recommended git tool is: git 00:00:00.018 using credential 00000000-0000-0000-0000-000000000002 00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.036 Fetching changes from the remote Git repository 00:00:00.038 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.051 Using shallow fetch with depth 1 00:00:00.051 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.051 > git --version # timeout=10 00:00:00.067 > git --version # 'git version 2.39.2' 00:00:00.067 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.080 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.080 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.404 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.415 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.426 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD) 00:00:04.426 > git config core.sparsecheckout # timeout=10 00:00:04.437 > git read-tree -mu HEAD # timeout=10 00:00:04.453 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5 00:00:04.473 Commit message: "jjb/create-perf-report: make job run concurrent" 00:00:04.473 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10 00:00:04.576 [Pipeline] Start of Pipeline 00:00:04.592 [Pipeline] library 00:00:04.593 Loading library shm_lib@master 00:00:04.594 Library shm_lib@master is cached. Copying from home. 00:00:04.611 [Pipeline] node 00:00:04.618 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.619 [Pipeline] { 00:00:04.628 [Pipeline] catchError 00:00:04.629 [Pipeline] { 00:00:04.639 [Pipeline] wrap 00:00:04.648 [Pipeline] { 00:00:04.656 [Pipeline] stage 00:00:04.658 [Pipeline] { (Prologue) 00:00:04.923 [Pipeline] sh 00:00:05.209 + logger -p user.info -t JENKINS-CI 00:00:05.231 [Pipeline] echo 00:00:05.232 Node: GP6 00:00:05.241 [Pipeline] sh 00:00:05.539 [Pipeline] setCustomBuildProperty 00:00:05.552 [Pipeline] echo 00:00:05.554 Cleanup processes 00:00:05.559 [Pipeline] sh 00:00:05.840 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.840 3338715 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.852 [Pipeline] sh 00:00:06.137 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.137 ++ grep -v 'sudo pgrep' 00:00:06.137 ++ awk '{print $1}' 00:00:06.137 + sudo kill -9 00:00:06.137 + true 00:00:06.153 [Pipeline] cleanWs 00:00:06.162 [WS-CLEANUP] Deleting project workspace... 00:00:06.162 [WS-CLEANUP] Deferred wipeout is used... 
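The "Cleanup processes" step traced above kills anything still running against the previous checkout before the workspace wipe. A minimal sketch of that idiom, with the workspace path taken from this job (the trailing `|| true` is inferred from the bare `+ true` line in the trace, since `kill -9` with an empty PID list exits non-zero):

```bash
#!/usr/bin/env bash
# Sketch of the pre-build cleanup seen in the trace above, not the exact Jenkins script.
# The workspace path is this job's; adjust for a local checkout.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# List anything still running against the old checkout, drop the pgrep invocation itself,
# and keep only the PIDs (column 1 of `pgrep -af` output).
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# With an empty PID list `kill -9` exits non-zero; tolerate that case,
# which is what the bare `+ true` line in the trace corresponds to.
sudo kill -9 $pids || true
```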
00:00:06.168 [WS-CLEANUP] done 00:00:06.172 [Pipeline] setCustomBuildProperty 00:00:06.181 [Pipeline] sh 00:00:06.458 + sudo git config --global --replace-all safe.directory '*' 00:00:06.541 [Pipeline] httpRequest 00:00:06.577 [Pipeline] echo 00:00:06.579 Sorcerer 10.211.164.101 is alive 00:00:06.588 [Pipeline] httpRequest 00:00:06.592 HttpMethod: GET 00:00:06.593 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:06.594 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:06.613 Response Code: HTTP/1.1 200 OK 00:00:06.614 Success: Status code 200 is in the accepted range: 200,404 00:00:06.614 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:08.995 [Pipeline] sh 00:00:09.281 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:09.304 [Pipeline] httpRequest 00:00:09.329 [Pipeline] echo 00:00:09.331 Sorcerer 10.211.164.101 is alive 00:00:09.343 [Pipeline] httpRequest 00:00:09.348 HttpMethod: GET 00:00:09.349 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:09.350 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:09.359 Response Code: HTTP/1.1 200 OK 00:00:09.359 Success: Status code 200 is in the accepted range: 200,404 00:00:09.360 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:07.900 [Pipeline] sh 00:01:08.186 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:10.735 [Pipeline] sh 00:01:11.022 + git -C spdk log --oneline -n5 00:01:11.022 719d03c6a sock/uring: only register net impl if supported 00:01:11.022 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:11.022 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:11.022 6c7c1f57e accel: add sequence outstanding stat 00:01:11.022 3bc8e6a26 accel: add utility to put task 00:01:11.042 [Pipeline] withCredentials 00:01:11.052 > git --version # timeout=10 00:01:11.064 > git --version # 'git version 2.39.2' 00:01:11.083 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:11.085 [Pipeline] { 00:01:11.097 [Pipeline] retry 00:01:11.099 [Pipeline] { 00:01:11.118 [Pipeline] sh 00:01:11.402 + git ls-remote http://dpdk.org/git/dpdk main 00:01:12.798 [Pipeline] } 00:01:12.819 [Pipeline] // retry 00:01:12.827 [Pipeline] } 00:01:12.850 [Pipeline] // withCredentials 00:01:12.858 [Pipeline] httpRequest 00:01:12.876 [Pipeline] echo 00:01:12.878 Sorcerer 10.211.164.101 is alive 00:01:12.886 [Pipeline] httpRequest 00:01:12.891 HttpMethod: GET 00:01:12.892 URL: http://10.211.164.101/packages/dpdk_7ab964bebd0f5a0cc9d25b6679a8cf3d69ba8365.tar.gz 00:01:12.892 Sending request to url: http://10.211.164.101/packages/dpdk_7ab964bebd0f5a0cc9d25b6679a8cf3d69ba8365.tar.gz 00:01:12.900 Response Code: HTTP/1.1 200 OK 00:01:12.901 Success: Status code 200 is in the accepted range: 200,404 00:01:12.901 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_7ab964bebd0f5a0cc9d25b6679a8cf3d69ba8365.tar.gz 00:01:21.856 [Pipeline] sh 00:01:22.143 + tar --no-same-owner -xf dpdk_7ab964bebd0f5a0cc9d25b6679a8cf3d69ba8365.tar.gz 00:01:23.557 [Pipeline] sh 00:01:23.840 + git -C dpdk log --oneline -n5 00:01:23.840 7ab964bebd net/mlx5: fix flow matcher object leak 00:01:23.840 
cd00dce625 net/mlx5: add hairpin out-of-buffer per-port counter 00:01:23.840 d0f858a6c6 net/mlx5: add hairpin out-of-buffer global counter 00:01:23.840 b7d19ee4e5 net/mlx5: increase flow pattern template maximum 00:01:23.840 9ae63e8eb9 net/mlx5: fix GRE option HWS flow item validation 00:01:23.851 [Pipeline] } 00:01:23.868 [Pipeline] // stage 00:01:23.878 [Pipeline] stage 00:01:23.881 [Pipeline] { (Prepare) 00:01:23.899 [Pipeline] writeFile 00:01:23.910 [Pipeline] sh 00:01:24.188 + logger -p user.info -t JENKINS-CI 00:01:24.198 [Pipeline] sh 00:01:24.477 + logger -p user.info -t JENKINS-CI 00:01:24.489 [Pipeline] sh 00:01:24.771 + cat autorun-spdk.conf 00:01:24.771 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.771 SPDK_TEST_NVMF=1 00:01:24.771 SPDK_TEST_NVME_CLI=1 00:01:24.771 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.771 SPDK_TEST_NVMF_NICS=e810 00:01:24.771 SPDK_TEST_VFIOUSER=1 00:01:24.771 SPDK_RUN_UBSAN=1 00:01:24.771 NET_TYPE=phy 00:01:24.771 SPDK_TEST_NATIVE_DPDK=main 00:01:24.771 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.779 RUN_NIGHTLY=1 00:01:24.782 [Pipeline] readFile 00:01:24.800 [Pipeline] withEnv 00:01:24.802 [Pipeline] { 00:01:24.811 [Pipeline] sh 00:01:25.090 + set -ex 00:01:25.090 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:25.090 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:25.090 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.090 ++ SPDK_TEST_NVMF=1 00:01:25.090 ++ SPDK_TEST_NVME_CLI=1 00:01:25.090 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.090 ++ SPDK_TEST_NVMF_NICS=e810 00:01:25.090 ++ SPDK_TEST_VFIOUSER=1 00:01:25.090 ++ SPDK_RUN_UBSAN=1 00:01:25.090 ++ NET_TYPE=phy 00:01:25.090 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:25.090 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.090 ++ RUN_NIGHTLY=1 00:01:25.090 + case $SPDK_TEST_NVMF_NICS in 00:01:25.090 + DRIVERS=ice 00:01:25.090 + [[ tcp == \r\d\m\a ]] 00:01:25.090 + [[ -n ice ]] 00:01:25.090 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:25.090 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:28.389 rmmod: ERROR: Module irdma is not currently loaded 00:01:28.389 rmmod: ERROR: Module i40iw is not currently loaded 00:01:28.389 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:28.389 + true 00:01:28.389 + for D in $DRIVERS 00:01:28.389 + sudo modprobe ice 00:01:28.389 + exit 0 00:01:28.399 [Pipeline] } 00:01:28.415 [Pipeline] // withEnv 00:01:28.421 [Pipeline] } 00:01:28.440 [Pipeline] // stage 00:01:28.449 [Pipeline] catchError 00:01:28.451 [Pipeline] { 00:01:28.466 [Pipeline] timeout 00:01:28.466 Timeout set to expire in 50 min 00:01:28.468 [Pipeline] { 00:01:28.484 [Pipeline] stage 00:01:28.486 [Pipeline] { (Tests) 00:01:28.505 [Pipeline] sh 00:01:28.790 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.790 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.790 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.790 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:28.790 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.790 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:28.790 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:28.790 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:28.790 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:28.790 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:28.790 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:28.790 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.790 + source /etc/os-release 00:01:28.790 ++ NAME='Fedora Linux' 00:01:28.790 ++ VERSION='38 (Cloud Edition)' 00:01:28.790 ++ ID=fedora 00:01:28.790 ++ VERSION_ID=38 00:01:28.790 ++ VERSION_CODENAME= 00:01:28.790 ++ PLATFORM_ID=platform:f38 00:01:28.790 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:28.790 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:28.790 ++ LOGO=fedora-logo-icon 00:01:28.790 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:28.790 ++ HOME_URL=https://fedoraproject.org/ 00:01:28.790 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:28.790 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:28.790 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:28.790 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:28.790 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:28.790 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:28.790 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:28.790 ++ SUPPORT_END=2024-05-14 00:01:28.790 ++ VARIANT='Cloud Edition' 00:01:28.790 ++ VARIANT_ID=cloud 00:01:28.790 + uname -a 00:01:28.790 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:28.790 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:29.727 Hugepages 00:01:29.727 node hugesize free / total 00:01:29.728 node0 1048576kB 0 / 0 00:01:29.728 node0 2048kB 0 / 0 00:01:29.728 node1 1048576kB 0 / 0 00:01:29.728 node1 2048kB 0 / 0 00:01:29.728 00:01:29.728 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:29.987 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:29.987 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:29.987 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:29.987 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:29.987 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:29.987 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:29.987 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:29.987 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:29.987 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:29.987 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:29.987 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:29.987 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:29.987 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:29.987 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:29.987 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:29.987 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:29.987 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:29.987 + rm -f /tmp/spdk-ld-path 00:01:29.987 + source autorun-spdk.conf 00:01:29.987 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.987 ++ SPDK_TEST_NVMF=1 00:01:29.987 ++ SPDK_TEST_NVME_CLI=1 00:01:29.987 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.987 ++ SPDK_TEST_NVMF_NICS=e810 00:01:29.987 ++ SPDK_TEST_VFIOUSER=1 00:01:29.987 ++ SPDK_RUN_UBSAN=1 00:01:29.987 ++ NET_TYPE=phy 00:01:29.987 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:29.987 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.987 ++ RUN_NIGHTLY=1 00:01:29.987 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:29.987 + [[ -n '' ]] 00:01:29.987 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.987 + for M in /var/spdk/build-*-manifest.txt 00:01:29.987 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:29.987 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:29.987 + for M in /var/spdk/build-*-manifest.txt 00:01:29.987 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:29.987 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:29.987 ++ uname 00:01:29.987 + [[ Linux == \L\i\n\u\x ]] 00:01:29.987 + sudo dmesg -T 00:01:29.987 + sudo dmesg --clear 00:01:29.987 + dmesg_pid=3340048 00:01:29.987 + [[ Fedora Linux == FreeBSD ]] 00:01:29.987 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.987 + sudo dmesg -Tw 00:01:29.987 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.987 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:29.987 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:29.987 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:29.987 + [[ -x /usr/src/fio-static/fio ]] 00:01:29.987 + export FIO_BIN=/usr/src/fio-static/fio 00:01:29.987 + FIO_BIN=/usr/src/fio-static/fio 00:01:29.987 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:29.987 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:29.987 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.988 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.988 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.988 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.988 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.988 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.988 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:29.988 Test configuration: 00:01:29.988 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.988 SPDK_TEST_NVMF=1 00:01:29.988 SPDK_TEST_NVME_CLI=1 00:01:29.988 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.988 SPDK_TEST_NVMF_NICS=e810 00:01:29.988 SPDK_TEST_VFIOUSER=1 00:01:29.988 SPDK_RUN_UBSAN=1 00:01:29.988 NET_TYPE=phy 00:01:29.988 SPDK_TEST_NATIVE_DPDK=main 00:01:29.988 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.988 RUN_NIGHTLY=1 13:09:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:29.988 13:09:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:29.988 13:09:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:30.247 13:09:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:30.247 13:09:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.247 13:09:27 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.247 13:09:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.247 13:09:27 -- paths/export.sh@5 -- $ export PATH 00:01:30.247 13:09:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.247 13:09:27 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:30.247 13:09:27 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:30.247 13:09:27 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720782567.XXXXXX 00:01:30.247 13:09:27 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720782567.eGzlK8 00:01:30.247 13:09:27 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:30.247 13:09:27 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:01:30.247 13:09:27 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:30.247 13:09:27 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:30.247 13:09:27 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:30.247 13:09:27 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:30.247 13:09:27 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:30.247 13:09:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:30.247 13:09:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.247 13:09:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:30.247 13:09:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:30.247 13:09:27 -- pm/common@17 -- $ local monitor 00:01:30.247 13:09:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:30.247 13:09:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:30.247 13:09:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:30.247 
13:09:27 -- pm/common@21 -- $ date +%s 00:01:30.247 13:09:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:30.247 13:09:27 -- pm/common@21 -- $ date +%s 00:01:30.247 13:09:27 -- pm/common@25 -- $ sleep 1 00:01:30.247 13:09:27 -- pm/common@21 -- $ date +%s 00:01:30.247 13:09:27 -- pm/common@21 -- $ date +%s 00:01:30.247 13:09:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720782567 00:01:30.247 13:09:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720782567 00:01:30.247 13:09:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720782567 00:01:30.247 13:09:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720782567 00:01:30.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720782567_collect-vmstat.pm.log 00:01:30.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720782567_collect-cpu-load.pm.log 00:01:30.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720782567_collect-cpu-temp.pm.log 00:01:30.247 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720782567_collect-bmc-pm.bmc.pm.log 00:01:31.186 13:09:28 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:31.186 13:09:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:31.186 13:09:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:31.186 13:09:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.186 13:09:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:31.186 Fri Jul 12 11:09:28 AM UTC 2024 00:01:31.186 13:09:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:31.186 v24.09-pre-202-g719d03c6a 00:01:31.186 13:09:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:31.186 13:09:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:31.186 13:09:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:31.186 13:09:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:31.186 13:09:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:31.186 13:09:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.186 ************************************ 00:01:31.186 START TEST ubsan 00:01:31.186 ************************************ 00:01:31.186 13:09:28 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:31.186 using ubsan 00:01:31.186 00:01:31.186 real 0m0.000s 00:01:31.186 user 0m0.000s 00:01:31.186 sys 0m0.000s 00:01:31.186 13:09:28 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:31.186 13:09:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:31.186 ************************************ 00:01:31.186 END TEST ubsan 00:01:31.186 ************************************ 00:01:31.186 13:09:28 -- common/autotest_common.sh@1142 -- $ return 0 00:01:31.186 
13:09:28 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:31.186 13:09:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:31.187 13:09:28 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:31.187 13:09:28 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:31.187 13:09:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:31.187 13:09:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.187 ************************************ 00:01:31.187 START TEST build_native_dpdk 00:01:31.187 ************************************ 00:01:31.187 13:09:28 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:31.187 7ab964bebd net/mlx5: fix flow matcher object leak 00:01:31.187 cd00dce625 net/mlx5: add hairpin out-of-buffer per-port counter 00:01:31.187 d0f858a6c6 net/mlx5: add hairpin out-of-buffer global counter 00:01:31.187 b7d19ee4e5 net/mlx5: increase flow pattern template maximum 00:01:31.187 9ae63e8eb9 net/mlx5: fix GRE option HWS flow item validation 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc1 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc1 21.11.0 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc1 '<' 21.11.0 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:31.187 13:09:28 
build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:31.187 13:09:28 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:31.187 patching file config/rte_config.h 00:01:31.187 Hunk #1 succeeded at 70 (offset 11 lines). 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:31.187 13:09:28 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:35.385 The Meson build system 00:01:35.385 Version: 1.3.1 00:01:35.385 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:35.385 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:35.385 Build type: native build 00:01:35.385 Program cat found: YES (/usr/bin/cat) 00:01:35.385 Project name: DPDK 00:01:35.385 Project version: 24.07.0-rc1 00:01:35.385 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:35.385 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:35.385 Host machine cpu family: x86_64 00:01:35.385 Host machine cpu: x86_64 00:01:35.385 Message: ## Building in Developer Mode ## 00:01:35.385 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:35.385 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:35.385 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:35.385 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:35.385 Program cat found: YES (/usr/bin/cat) 00:01:35.385 
config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:35.385 Compiler for C supports arguments -march=native: YES 00:01:35.385 Checking for size of "void *" : 8 00:01:35.385 Checking for size of "void *" : 8 (cached) 00:01:35.385 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:35.385 Library m found: YES 00:01:35.385 Library numa found: YES 00:01:35.385 Has header "numaif.h" : YES 00:01:35.385 Library fdt found: NO 00:01:35.385 Library execinfo found: NO 00:01:35.385 Has header "execinfo.h" : YES 00:01:35.385 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:35.385 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:35.385 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:35.385 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:35.385 Run-time dependency openssl found: YES 3.0.9 00:01:35.385 Run-time dependency libpcap found: YES 1.10.4 00:01:35.385 Has header "pcap.h" with dependency libpcap: YES 00:01:35.385 Compiler for C supports arguments -Wcast-qual: YES 00:01:35.385 Compiler for C supports arguments -Wdeprecated: YES 00:01:35.385 Compiler for C supports arguments -Wformat: YES 00:01:35.385 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:35.385 Compiler for C supports arguments -Wformat-security: NO 00:01:35.385 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.385 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:35.385 Compiler for C supports arguments -Wnested-externs: YES 00:01:35.385 Compiler for C supports arguments -Wold-style-definition: YES 00:01:35.385 Compiler for C supports arguments -Wpointer-arith: YES 00:01:35.385 Compiler for C supports arguments -Wsign-compare: YES 00:01:35.385 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:35.385 Compiler for C supports arguments -Wundef: YES 00:01:35.385 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.385 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:35.385 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:35.385 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.385 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:35.385 Program objdump found: YES (/usr/bin/objdump) 00:01:35.385 Compiler for C supports arguments -mavx512f: YES 00:01:35.385 Checking if "AVX512 checking" compiles: YES 00:01:35.385 Fetching value of define "__SSE4_2__" : 1 00:01:35.385 Fetching value of define "__AES__" : 1 00:01:35.385 Fetching value of define "__AVX__" : 1 00:01:35.385 Fetching value of define "__AVX2__" : (undefined) 00:01:35.385 Fetching value of define "__AVX512BW__" : (undefined) 00:01:35.385 Fetching value of define "__AVX512CD__" : (undefined) 00:01:35.385 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:35.385 Fetching value of define "__AVX512F__" : (undefined) 00:01:35.385 Fetching value of define "__AVX512VL__" : (undefined) 00:01:35.385 Fetching value of define "__PCLMUL__" : 1 00:01:35.385 Fetching value of define "__RDRND__" : 1 00:01:35.385 Fetching value of define "__RDSEED__" : (undefined) 00:01:35.385 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:35.385 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:35.385 Message: lib/log: Defining dependency "log" 00:01:35.385 Message: lib/kvargs: Defining dependency "kvargs" 00:01:35.385 Message: lib/argparse: Defining dependency "argparse" 
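The configure run whose output follows is driven by the single long `meson` invocation shown flattened in the trace above. Reflowed one option per line for readability (paths are this job's workspace and would differ locally):

```bash
# DPDK configure step from the trace above, reflowed for readability.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk

meson build-tmp \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
  --libdir lib \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false \
  -Dc_link_args= \
  '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Dmachine=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

Meson itself flags two of these choices as deprecated later in this output: the "machine" option (the warning above suggests `cpu_instruction_set` instead) and running `meson [options]` rather than `meson setup [options]`; on a newer Meson the equivalent would presumably be `meson setup build-tmp … -Dcpu_instruction_set=native`.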
00:01:35.385 Message: lib/telemetry: Defining dependency "telemetry" 00:01:35.385 Checking for function "getentropy" : NO 00:01:35.385 Message: lib/eal: Defining dependency "eal" 00:01:35.385 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:35.385 Message: lib/ring: Defining dependency "ring" 00:01:35.385 Message: lib/rcu: Defining dependency "rcu" 00:01:35.385 Message: lib/mempool: Defining dependency "mempool" 00:01:35.385 Message: lib/mbuf: Defining dependency "mbuf" 00:01:35.385 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:35.385 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:35.385 Compiler for C supports arguments -mpclmul: YES 00:01:35.385 Compiler for C supports arguments -maes: YES 00:01:35.385 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.385 Compiler for C supports arguments -mavx512bw: YES 00:01:35.385 Compiler for C supports arguments -mavx512dq: YES 00:01:35.385 Compiler for C supports arguments -mavx512vl: YES 00:01:35.385 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:35.385 Compiler for C supports arguments -mavx2: YES 00:01:35.385 Compiler for C supports arguments -mavx: YES 00:01:35.385 Message: lib/net: Defining dependency "net" 00:01:35.385 Message: lib/meter: Defining dependency "meter" 00:01:35.385 Message: lib/ethdev: Defining dependency "ethdev" 00:01:35.385 Message: lib/pci: Defining dependency "pci" 00:01:35.385 Message: lib/cmdline: Defining dependency "cmdline" 00:01:35.385 Message: lib/metrics: Defining dependency "metrics" 00:01:35.385 Message: lib/hash: Defining dependency "hash" 00:01:35.385 Message: lib/timer: Defining dependency "timer" 00:01:35.385 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:35.385 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:35.385 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:35.385 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:35.385 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:35.385 Message: lib/acl: Defining dependency "acl" 00:01:35.385 Message: lib/bbdev: Defining dependency "bbdev" 00:01:35.385 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:35.385 Run-time dependency libelf found: YES 0.190 00:01:35.385 Message: lib/bpf: Defining dependency "bpf" 00:01:35.385 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:35.385 Message: lib/compressdev: Defining dependency "compressdev" 00:01:35.385 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:35.385 Message: lib/distributor: Defining dependency "distributor" 00:01:35.385 Message: lib/dmadev: Defining dependency "dmadev" 00:01:35.386 Message: lib/efd: Defining dependency "efd" 00:01:35.386 Message: lib/eventdev: Defining dependency "eventdev" 00:01:35.386 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:35.386 Message: lib/gpudev: Defining dependency "gpudev" 00:01:35.386 Message: lib/gro: Defining dependency "gro" 00:01:35.386 Message: lib/gso: Defining dependency "gso" 00:01:35.386 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:35.386 Message: lib/jobstats: Defining dependency "jobstats" 00:01:35.386 Message: lib/latencystats: Defining dependency "latencystats" 00:01:35.386 Message: lib/lpm: Defining dependency "lpm" 00:01:35.386 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:35.386 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:35.386 Fetching value of define 
"__AVX512IFMA__" : (undefined) 00:01:35.386 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:35.386 Message: lib/member: Defining dependency "member" 00:01:35.386 Message: lib/pcapng: Defining dependency "pcapng" 00:01:35.386 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:35.386 Message: lib/power: Defining dependency "power" 00:01:35.386 Message: lib/rawdev: Defining dependency "rawdev" 00:01:35.386 Message: lib/regexdev: Defining dependency "regexdev" 00:01:35.386 Message: lib/mldev: Defining dependency "mldev" 00:01:35.386 Message: lib/rib: Defining dependency "rib" 00:01:35.386 Message: lib/reorder: Defining dependency "reorder" 00:01:35.386 Message: lib/sched: Defining dependency "sched" 00:01:35.386 Message: lib/security: Defining dependency "security" 00:01:35.386 Message: lib/stack: Defining dependency "stack" 00:01:35.386 Has header "linux/userfaultfd.h" : YES 00:01:35.386 Has header "linux/vduse.h" : YES 00:01:35.386 Message: lib/vhost: Defining dependency "vhost" 00:01:35.386 Message: lib/ipsec: Defining dependency "ipsec" 00:01:35.386 Message: lib/pdcp: Defining dependency "pdcp" 00:01:35.386 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:35.386 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:35.386 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:35.386 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:35.386 Message: lib/fib: Defining dependency "fib" 00:01:35.386 Message: lib/port: Defining dependency "port" 00:01:35.386 Message: lib/pdump: Defining dependency "pdump" 00:01:35.386 Message: lib/table: Defining dependency "table" 00:01:35.386 Message: lib/pipeline: Defining dependency "pipeline" 00:01:35.386 Message: lib/graph: Defining dependency "graph" 00:01:35.386 Message: lib/node: Defining dependency "node" 00:01:37.298 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:37.298 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:37.298 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.298 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.298 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:37.298 Compiler for C supports arguments -Wno-unused-value: YES 00:01:37.298 Compiler for C supports arguments -Wno-format: YES 00:01:37.298 Compiler for C supports arguments -Wno-format-security: YES 00:01:37.298 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:37.298 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:37.298 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:37.298 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:37.298 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.298 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.298 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:37.298 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:37.298 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:37.298 Has header "sys/epoll.h" : YES 00:01:37.298 Program doxygen found: YES (/usr/bin/doxygen) 00:01:37.298 Configuring doxy-api-html.conf using configuration 00:01:37.298 Configuring doxy-api-man.conf using configuration 00:01:37.298 Program mandb found: YES (/usr/bin/mandb) 00:01:37.298 Program sphinx-build found: NO 00:01:37.298 Configuring rte_build_config.h using configuration 00:01:37.298 Message: 00:01:37.298 
================= 00:01:37.298 Applications Enabled 00:01:37.298 ================= 00:01:37.298 00:01:37.298 apps: 00:01:37.298 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:37.298 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:37.298 test-pmd, test-regex, test-sad, test-security-perf, 00:01:37.298 00:01:37.298 Message: 00:01:37.298 ================= 00:01:37.298 Libraries Enabled 00:01:37.298 ================= 00:01:37.298 00:01:37.298 libs: 00:01:37.298 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:37.298 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:37.298 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:37.298 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:37.298 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:37.298 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:37.298 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:37.298 graph, node, 00:01:37.298 00:01:37.298 Message: 00:01:37.298 =============== 00:01:37.298 Drivers Enabled 00:01:37.298 =============== 00:01:37.298 00:01:37.298 common: 00:01:37.298 00:01:37.298 bus: 00:01:37.298 pci, vdev, 00:01:37.298 mempool: 00:01:37.298 ring, 00:01:37.298 dma: 00:01:37.298 00:01:37.298 net: 00:01:37.298 i40e, 00:01:37.298 raw: 00:01:37.298 00:01:37.298 crypto: 00:01:37.298 00:01:37.298 compress: 00:01:37.298 00:01:37.298 regex: 00:01:37.298 00:01:37.298 ml: 00:01:37.298 00:01:37.298 vdpa: 00:01:37.298 00:01:37.298 event: 00:01:37.298 00:01:37.298 baseband: 00:01:37.298 00:01:37.298 gpu: 00:01:37.298 00:01:37.298 00:01:37.298 Message: 00:01:37.298 ================= 00:01:37.298 Content Skipped 00:01:37.298 ================= 00:01:37.298 00:01:37.298 apps: 00:01:37.298 00:01:37.298 libs: 00:01:37.298 00:01:37.298 drivers: 00:01:37.298 common/cpt: not in enabled drivers build config 00:01:37.298 common/dpaax: not in enabled drivers build config 00:01:37.298 common/iavf: not in enabled drivers build config 00:01:37.298 common/idpf: not in enabled drivers build config 00:01:37.298 common/ionic: not in enabled drivers build config 00:01:37.298 common/mvep: not in enabled drivers build config 00:01:37.298 common/octeontx: not in enabled drivers build config 00:01:37.298 bus/auxiliary: not in enabled drivers build config 00:01:37.298 bus/cdx: not in enabled drivers build config 00:01:37.298 bus/dpaa: not in enabled drivers build config 00:01:37.298 bus/fslmc: not in enabled drivers build config 00:01:37.298 bus/ifpga: not in enabled drivers build config 00:01:37.298 bus/platform: not in enabled drivers build config 00:01:37.298 bus/uacce: not in enabled drivers build config 00:01:37.298 bus/vmbus: not in enabled drivers build config 00:01:37.298 common/cnxk: not in enabled drivers build config 00:01:37.298 common/mlx5: not in enabled drivers build config 00:01:37.298 common/nfp: not in enabled drivers build config 00:01:37.298 common/nitrox: not in enabled drivers build config 00:01:37.298 common/qat: not in enabled drivers build config 00:01:37.298 common/sfc_efx: not in enabled drivers build config 00:01:37.298 mempool/bucket: not in enabled drivers build config 00:01:37.298 mempool/cnxk: not in enabled drivers build config 00:01:37.298 mempool/dpaa: not in enabled drivers build config 00:01:37.298 mempool/dpaa2: not in enabled drivers build config 00:01:37.298 
mempool/octeontx: not in enabled drivers build config 00:01:37.298 mempool/stack: not in enabled drivers build config 00:01:37.298 dma/cnxk: not in enabled drivers build config 00:01:37.298 dma/dpaa: not in enabled drivers build config 00:01:37.298 dma/dpaa2: not in enabled drivers build config 00:01:37.298 dma/hisilicon: not in enabled drivers build config 00:01:37.298 dma/idxd: not in enabled drivers build config 00:01:37.298 dma/ioat: not in enabled drivers build config 00:01:37.298 dma/odm: not in enabled drivers build config 00:01:37.298 dma/skeleton: not in enabled drivers build config 00:01:37.298 net/af_packet: not in enabled drivers build config 00:01:37.298 net/af_xdp: not in enabled drivers build config 00:01:37.298 net/ark: not in enabled drivers build config 00:01:37.298 net/atlantic: not in enabled drivers build config 00:01:37.298 net/avp: not in enabled drivers build config 00:01:37.298 net/axgbe: not in enabled drivers build config 00:01:37.298 net/bnx2x: not in enabled drivers build config 00:01:37.298 net/bnxt: not in enabled drivers build config 00:01:37.298 net/bonding: not in enabled drivers build config 00:01:37.298 net/cnxk: not in enabled drivers build config 00:01:37.298 net/cpfl: not in enabled drivers build config 00:01:37.298 net/cxgbe: not in enabled drivers build config 00:01:37.298 net/dpaa: not in enabled drivers build config 00:01:37.298 net/dpaa2: not in enabled drivers build config 00:01:37.298 net/e1000: not in enabled drivers build config 00:01:37.298 net/ena: not in enabled drivers build config 00:01:37.298 net/enetc: not in enabled drivers build config 00:01:37.298 net/enetfec: not in enabled drivers build config 00:01:37.298 net/enic: not in enabled drivers build config 00:01:37.298 net/failsafe: not in enabled drivers build config 00:01:37.298 net/fm10k: not in enabled drivers build config 00:01:37.298 net/gve: not in enabled drivers build config 00:01:37.298 net/hinic: not in enabled drivers build config 00:01:37.298 net/hns3: not in enabled drivers build config 00:01:37.298 net/iavf: not in enabled drivers build config 00:01:37.298 net/ice: not in enabled drivers build config 00:01:37.298 net/idpf: not in enabled drivers build config 00:01:37.298 net/igc: not in enabled drivers build config 00:01:37.298 net/ionic: not in enabled drivers build config 00:01:37.298 net/ipn3ke: not in enabled drivers build config 00:01:37.298 net/ixgbe: not in enabled drivers build config 00:01:37.298 net/mana: not in enabled drivers build config 00:01:37.298 net/memif: not in enabled drivers build config 00:01:37.298 net/mlx4: not in enabled drivers build config 00:01:37.298 net/mlx5: not in enabled drivers build config 00:01:37.298 net/mvneta: not in enabled drivers build config 00:01:37.298 net/mvpp2: not in enabled drivers build config 00:01:37.298 net/netvsc: not in enabled drivers build config 00:01:37.298 net/nfb: not in enabled drivers build config 00:01:37.298 net/nfp: not in enabled drivers build config 00:01:37.298 net/ngbe: not in enabled drivers build config 00:01:37.298 net/null: not in enabled drivers build config 00:01:37.298 net/octeontx: not in enabled drivers build config 00:01:37.298 net/octeon_ep: not in enabled drivers build config 00:01:37.298 net/pcap: not in enabled drivers build config 00:01:37.298 net/pfe: not in enabled drivers build config 00:01:37.298 net/qede: not in enabled drivers build config 00:01:37.298 net/ring: not in enabled drivers build config 00:01:37.298 net/sfc: not in enabled drivers build config 00:01:37.298 net/softnic: 
not in enabled drivers build config 00:01:37.298 net/tap: not in enabled drivers build config 00:01:37.298 net/thunderx: not in enabled drivers build config 00:01:37.298 net/txgbe: not in enabled drivers build config 00:01:37.298 net/vdev_netvsc: not in enabled drivers build config 00:01:37.298 net/vhost: not in enabled drivers build config 00:01:37.298 net/virtio: not in enabled drivers build config 00:01:37.298 net/vmxnet3: not in enabled drivers build config 00:01:37.298 raw/cnxk_bphy: not in enabled drivers build config 00:01:37.298 raw/cnxk_gpio: not in enabled drivers build config 00:01:37.298 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:37.298 raw/ifpga: not in enabled drivers build config 00:01:37.298 raw/ntb: not in enabled drivers build config 00:01:37.298 raw/skeleton: not in enabled drivers build config 00:01:37.298 crypto/armv8: not in enabled drivers build config 00:01:37.298 crypto/bcmfs: not in enabled drivers build config 00:01:37.298 crypto/caam_jr: not in enabled drivers build config 00:01:37.298 crypto/ccp: not in enabled drivers build config 00:01:37.298 crypto/cnxk: not in enabled drivers build config 00:01:37.299 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.299 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.299 crypto/ionic: not in enabled drivers build config 00:01:37.299 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.299 crypto/mlx5: not in enabled drivers build config 00:01:37.299 crypto/mvsam: not in enabled drivers build config 00:01:37.299 crypto/nitrox: not in enabled drivers build config 00:01:37.299 crypto/null: not in enabled drivers build config 00:01:37.299 crypto/octeontx: not in enabled drivers build config 00:01:37.299 crypto/openssl: not in enabled drivers build config 00:01:37.299 crypto/scheduler: not in enabled drivers build config 00:01:37.299 crypto/uadk: not in enabled drivers build config 00:01:37.299 crypto/virtio: not in enabled drivers build config 00:01:37.299 compress/isal: not in enabled drivers build config 00:01:37.299 compress/mlx5: not in enabled drivers build config 00:01:37.299 compress/nitrox: not in enabled drivers build config 00:01:37.299 compress/octeontx: not in enabled drivers build config 00:01:37.299 compress/uadk: not in enabled drivers build config 00:01:37.299 compress/zlib: not in enabled drivers build config 00:01:37.299 regex/mlx5: not in enabled drivers build config 00:01:37.299 regex/cn9k: not in enabled drivers build config 00:01:37.299 ml/cnxk: not in enabled drivers build config 00:01:37.299 vdpa/ifc: not in enabled drivers build config 00:01:37.299 vdpa/mlx5: not in enabled drivers build config 00:01:37.299 vdpa/nfp: not in enabled drivers build config 00:01:37.299 vdpa/sfc: not in enabled drivers build config 00:01:37.299 event/cnxk: not in enabled drivers build config 00:01:37.299 event/dlb2: not in enabled drivers build config 00:01:37.299 event/dpaa: not in enabled drivers build config 00:01:37.299 event/dpaa2: not in enabled drivers build config 00:01:37.299 event/dsw: not in enabled drivers build config 00:01:37.299 event/opdl: not in enabled drivers build config 00:01:37.299 event/skeleton: not in enabled drivers build config 00:01:37.299 event/sw: not in enabled drivers build config 00:01:37.299 event/octeontx: not in enabled drivers build config 00:01:37.299 baseband/acc: not in enabled drivers build config 00:01:37.299 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:37.299 baseband/fpga_lte_fec: not in enabled drivers build 
config 00:01:37.299 baseband/la12xx: not in enabled drivers build config 00:01:37.299 baseband/null: not in enabled drivers build config 00:01:37.299 baseband/turbo_sw: not in enabled drivers build config 00:01:37.299 gpu/cuda: not in enabled drivers build config 00:01:37.299 00:01:37.299 00:01:37.299 Build targets in project: 224 00:01:37.299 00:01:37.299 DPDK 24.07.0-rc1 00:01:37.299 00:01:37.299 User defined options 00:01:37.299 libdir : lib 00:01:37.299 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.299 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:37.299 c_link_args : 00:01:37.299 enable_docs : false 00:01:37.299 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:37.299 enable_kmods : false 00:01:37.299 machine : native 00:01:37.299 tests : false 00:01:37.299 00:01:37.299 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.299 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:37.299 13:09:34 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:37.299 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:37.299 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.299 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.299 [3/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.299 [4/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.299 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.299 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.299 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.299 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.299 [9/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.299 [10/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.299 [11/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.299 [12/723] Linking static target lib/librte_kvargs.a 00:01:37.299 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.299 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.559 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.559 [16/723] Linking static target lib/librte_log.a 00:01:37.559 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:37.559 [18/723] Linking static target lib/librte_argparse.a 00:01:37.822 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.083 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.083 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:38.083 [22/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.083 [23/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.083 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.083 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.348 [26/723] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.348 [27/723] Linking target lib/librte_log.so.24.2 00:01:38.348 [28/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.348 [29/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.348 [30/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.348 [31/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.348 [32/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.348 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.348 [34/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.348 [35/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.348 [36/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.348 [37/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.348 [38/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.348 [39/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.348 [40/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.348 [41/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.348 [42/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.348 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.348 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.348 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.348 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.348 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.348 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.348 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.348 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.348 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.348 [52/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.348 [53/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:38.348 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.609 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.609 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.609 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.609 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.609 [59/723] Linking target lib/librte_argparse.so.24.2 00:01:38.609 [60/723] Linking target lib/librte_kvargs.so.24.2 00:01:38.609 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.609 [62/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.609 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.869 [64/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:38.869 [65/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.869 [66/723] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.869 [67/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.869 [68/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.869 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.869 [70/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.869 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.133 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.133 [73/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.392 [74/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.392 [75/723] Linking static target lib/librte_pci.a 00:01:39.392 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.392 [77/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:39.392 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.392 [79/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.392 [80/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.392 [81/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:39.392 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.652 [83/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.652 [84/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.652 [85/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:39.652 [86/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:39.652 [87/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.652 [88/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.652 [89/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.652 [90/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.652 [91/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.652 [92/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.652 [93/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.652 [94/723] Linking static target lib/librte_ring.a 00:01:39.652 [95/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.652 [96/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.652 [97/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.652 [98/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.652 [99/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.652 [100/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.652 [101/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.652 [102/723] Linking static target lib/librte_meter.a 00:01:39.652 [103/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.652 [104/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.652 [105/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.652 [106/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.652 [107/723] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.652 [108/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.652 [109/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:39.652 [110/723] Linking static target lib/librte_telemetry.a 00:01:39.914 [111/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.914 [112/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.914 [113/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.914 [114/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.914 [115/723] Linking static target lib/librte_net.a 00:01:39.914 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.172 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:40.172 [118/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.172 [119/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.172 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.172 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.172 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.172 [123/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.172 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:40.172 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.478 [126/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.478 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.478 [128/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.478 [129/723] Linking static target lib/librte_mempool.a 00:01:40.478 [130/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.478 [131/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.478 [132/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.478 [133/723] Linking target lib/librte_telemetry.so.24.2 00:01:40.478 [134/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.478 [135/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.478 [136/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.478 [137/723] Linking static target lib/librte_eal.a 00:01:40.478 [138/723] Linking static target lib/librte_cmdline.a 00:01:40.735 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:40.735 [140/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:40.735 [141/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.735 [142/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.735 [143/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:40.735 [144/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:40.735 [145/723] Linking static target lib/librte_cfgfile.a 00:01:40.735 [146/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:40.735 [147/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 
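The "User defined options" block in the configuration summary above, together with meson's warning that running the setup command without the explicit `setup` verb is deprecated, pins down how this tree was configured. Below is a minimal sketch of the equivalent explicit invocation, reconstructed from that summary alone; the -D option spellings are assumptions, and the job's actual autobuild_common.sh call is not shown in this log.

    # Sketch only: values copied from the configuration summary above.
    # Run from the dpdk source tree so that build-tmp matches the directory
    # the `ninja -C .../dpdk/build-tmp -j48` step below compiles in.
    meson setup build-tmp \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false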
00:01:40.735 [148/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.992 [149/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:40.992 [150/723] Linking static target lib/librte_metrics.a 00:01:40.992 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.992 [152/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.992 [153/723] Linking static target lib/librte_rcu.a 00:01:40.992 [154/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:40.992 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:41.252 [156/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:41.252 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:41.252 [158/723] Linking static target lib/librte_bitratestats.a 00:01:41.252 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:41.252 [160/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:41.252 [161/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:41.252 [162/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:41.514 [163/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.514 [164/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:41.514 [165/723] Linking static target lib/librte_mbuf.a 00:01:41.514 [166/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.514 [167/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:41.514 [168/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.514 [169/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.514 [170/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.514 [171/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:41.514 [172/723] Linking static target lib/librte_timer.a 00:01:41.514 [173/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:41.514 [174/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:41.776 [175/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:41.776 [176/723] Linking static target lib/librte_bbdev.a 00:01:41.776 [177/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.776 [178/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:41.776 [179/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.776 [180/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:41.776 [181/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.776 [182/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:41.776 [183/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.038 [184/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.038 [185/723] Linking static target lib/librte_compressdev.a 00:01:42.038 [186/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:42.038 [187/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:42.038 [188/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:42.038 [189/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:42.038 [190/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:42.038 [191/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:42.303 [192/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:42.303 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.562 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:42.562 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:42.822 [196/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.822 [197/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:42.822 [198/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:42.822 [199/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.822 [200/723] Linking static target lib/librte_dmadev.a 00:01:42.822 [201/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:42.822 [202/723] Linking static target lib/librte_distributor.a 00:01:42.822 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:42.822 [204/723] Linking static target lib/librte_bpf.a 00:01:42.822 [205/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:43.085 [206/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:43.085 [207/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:43.085 [208/723] Linking static target lib/librte_dispatcher.a 00:01:43.086 [209/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:43.086 [210/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:43.086 [211/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:43.086 [212/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:43.086 [213/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:43.086 [214/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:43.086 [215/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:43.086 [216/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:43.086 [217/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:43.086 [218/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:43.086 [219/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:43.086 [220/723] Linking static target lib/librte_gpudev.a 00:01:43.348 [221/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:43.348 [222/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:43.348 [223/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.348 [224/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:43.348 [225/723] Linking static target lib/librte_gro.a 00:01:43.348 [226/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:43.348 [227/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:43.348 [228/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:43.348 [229/723] Linking static target 
lib/librte_jobstats.a 00:01:43.348 [230/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.348 [231/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:43.348 [232/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:43.348 [233/723] Linking static target lib/librte_gso.a 00:01:43.609 [234/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.609 [235/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:43.609 [236/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:43.609 [237/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.609 [238/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:43.609 [239/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:43.609 [240/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.609 [241/723] Linking static target lib/librte_latencystats.a 00:01:43.869 [242/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.869 [243/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:43.869 [244/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:43.869 [245/723] Linking static target lib/librte_ip_frag.a 00:01:43.869 [246/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.869 [247/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:43.869 [248/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:43.869 [249/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:43.869 [250/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:43.869 [251/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:44.134 [252/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:44.134 [253/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:44.134 [254/723] Linking static target lib/librte_efd.a 00:01:44.134 [255/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:44.134 [256/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.134 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:44.399 [258/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.399 [259/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:44.399 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:44.399 [261/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:44.399 [262/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.399 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:44.399 [264/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:44.399 [265/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.661 [266/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.661 [267/723] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:44.661 [268/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:44.661 [269/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:44.661 [270/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.661 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:44.661 [272/723] Linking static target lib/librte_regexdev.a 00:01:44.920 [273/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.920 [274/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:44.920 [275/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:44.920 [276/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:44.920 [277/723] Linking static target lib/librte_rawdev.a 00:01:44.920 [278/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:44.920 [279/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:44.920 [280/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:44.920 [281/723] Linking static target lib/librte_pcapng.a 00:01:44.920 [282/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.920 [283/723] Linking static target lib/librte_power.a 00:01:44.920 [284/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:44.920 [285/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:44.920 [286/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:44.920 [287/723] Linking static target lib/librte_stack.a 00:01:44.920 [288/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:45.182 [289/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:45.182 [290/723] Linking static target lib/librte_mldev.a 00:01:45.182 [291/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:45.182 [292/723] Linking static target lib/librte_lpm.a 00:01:45.182 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:45.182 [294/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:45.444 [295/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:45.444 [296/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:45.444 [297/723] Linking static target lib/librte_reorder.a 00:01:45.444 [298/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.445 [299/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:45.445 [300/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.445 [301/723] Linking static target lib/acl/libavx2_tmp.a 00:01:45.445 [302/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:45.445 [303/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.445 [304/723] Linking static target lib/librte_cryptodev.a 00:01:45.445 [305/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:45.445 [306/723] Linking static target lib/librte_security.a 00:01:45.445 [307/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:45.706 [308/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:45.706 [309/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.706 
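Interleaved with the compile steps, meson emits per-library symbol-check targets (the "Generating lib/<name>.sym_chk with a custom command" entries) and a "<name>.symbols" file next to each shared object. Their exact contents are build-system internals, so purely as an illustration, here is one way to eyeball what a freshly linked library exports, reusing the librte_log paths that appear earlier in this log; this is not a step the CI job runs.

    # Paths are relative to the build directory ninja entered (.../dpdk/build-tmp).
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
    # rte_-prefixed symbols actually exported by the shared object:
    nm -D --defined-only lib/librte_log.so.24.2 | awk '$3 ~ /^rte_/ {print $3}' | sort
    # Symbol listing meson generated alongside it (format is meson-internal):
    head lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols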
[310/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.706 [311/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.706 [312/723] Linking static target lib/librte_hash.a 00:01:45.706 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:45.968 [314/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:45.968 [315/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.968 [316/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:45.968 [317/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.968 [318/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:45.968 [319/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:45.968 [320/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.968 [321/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:45.968 [322/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:45.968 [323/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:45.968 [324/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:45.968 [325/723] Linking static target lib/librte_rib.a 00:01:45.968 [326/723] Linking static target lib/acl/libavx512_tmp.a 00:01:45.968 [327/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:45.968 [328/723] Linking static target lib/librte_acl.a 00:01:45.968 [329/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:46.229 [330/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:46.229 [331/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:46.229 [332/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:46.229 [333/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.229 [334/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:46.229 [335/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:46.229 [336/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:46.229 [337/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:46.229 [338/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:46.489 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.489 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:46.748 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.748 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:46.748 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.748 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:47.321 [345/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:47.322 [346/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:47.322 [347/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:47.322 [348/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:47.322 [349/723] Linking static target lib/librte_eventdev.a 00:01:47.322 [350/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:47.322 
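Several libraries above build their hot paths as separate instruction-set-specific helper archives, for example lib/acl/libavx2_tmp.a and libavx512_tmp.a, and lib/fib/libtrie_avx512_tmp.a and libdir24_8_avx512_tmp.a; the i40e driver later gets SSE, AVX2 and AVX512 rx/tx variants the same way. With machine set to native in the configuration, which variant can actually run depends on the CPU, so a quick, illustrative check of what this build host advertises (not part of the job) looks like:

    # Show whether the host reports the SSE4.2 / AVX2 / AVX-512F ISA extensions.
    grep -o -E 'sse4_2|avx2|avx512f' /proc/cpuinfo | sort -u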
[351/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:47.322 [352/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:47.322 [353/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.322 [354/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:47.322 [355/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:47.322 [356/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:47.582 [357/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:47.582 [358/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:47.582 [359/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.582 [360/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:47.582 [361/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:47.582 [362/723] Linking static target lib/librte_member.a 00:01:47.582 [363/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:47.582 [364/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:47.582 [365/723] Linking static target lib/librte_sched.a 00:01:47.582 [366/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:47.582 [367/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:47.582 [368/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:47.582 [369/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:47.582 [370/723] Linking static target lib/librte_fib.a 00:01:47.582 [371/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:47.582 [372/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:47.582 [373/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:47.872 [374/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:47.872 [375/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:47.872 [376/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:47.872 [377/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:47.872 [378/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:48.149 [379/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:48.149 [380/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.149 [381/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.149 [382/723] Linking static target lib/librte_ethdev.a 00:01:48.149 [383/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:48.149 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:48.149 [385/723] Linking static target lib/librte_ipsec.a 00:01:48.149 [386/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:48.149 [387/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.149 [388/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.411 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:48.411 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:48.411 [391/723] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:48.671 [392/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:48.671 [393/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:48.671 [394/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:48.671 [395/723] Linking static target lib/librte_pdump.a 00:01:48.671 [396/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:48.671 [397/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:48.671 [398/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:48.671 [399/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.671 [400/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:48.671 [401/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:48.671 [402/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:48.934 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:48.934 [404/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:48.934 [405/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:48.934 [406/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:48.934 [407/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:48.934 [408/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:48.934 [409/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:49.196 [410/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.196 [411/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:49.196 [412/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:49.196 [413/723] Linking static target lib/librte_pdcp.a 00:01:49.196 [414/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:49.196 [415/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:49.196 [416/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:49.196 [417/723] Linking static target lib/librte_table.a 00:01:49.196 [418/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:49.196 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:49.457 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:49.458 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:49.720 [422/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:49.720 [423/723] Linking static target lib/librte_graph.a 00:01:49.720 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.720 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:49.720 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:49.720 [427/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:49.982 [428/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:49.982 [429/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:49.982 [430/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:49.982 [431/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture 
output) 00:01:49.982 [432/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:49.982 [433/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:49.982 [434/723] Linking static target lib/librte_port.a 00:01:49.982 [435/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:49.982 [436/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:49.982 [437/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:50.246 [438/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:50.246 [439/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:50.246 [440/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.246 [441/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:50.246 [442/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:50.246 [443/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:50.507 [444/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:50.507 [445/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:50.507 [446/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.507 [447/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.507 [448/723] Linking static target drivers/librte_bus_vdev.a 00:01:50.507 [449/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.507 [450/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:50.770 [451/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.770 [452/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:50.770 [453/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.770 [454/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:50.770 [455/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.770 [456/723] Linking static target drivers/librte_bus_pci.a 00:01:50.770 [457/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:50.770 [458/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.770 [459/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:50.770 [460/723] Linking static target lib/librte_node.a 00:01:50.770 [461/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:50.770 [462/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.031 [463/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:51.031 [464/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:51.031 [465/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.031 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:51.031 [467/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:51.031 [468/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:51.295 [469/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 
00:01:51.295 [470/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:51.295 [471/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:51.295 [472/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:51.295 [473/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:51.295 [474/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:51.295 [475/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:51.571 [476/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.571 [477/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:51.571 [478/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:51.571 [479/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:51.571 [480/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:51.571 [481/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:51.571 [482/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.571 [483/723] Linking static target drivers/librte_mempool_ring.a 00:01:51.571 [484/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:51.571 [485/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.571 [486/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.836 [487/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.836 [488/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:51.836 [489/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:51.836 [490/723] Linking target lib/librte_eal.so.24.2 00:01:51.836 [491/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:51.836 [492/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:51.836 [493/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:51.836 [494/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:52.096 [495/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:52.096 [496/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:52.096 [497/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:52.096 [498/723] Linking target lib/librte_ring.so.24.2 00:01:52.096 [499/723] Linking target lib/librte_meter.so.24.2 00:01:52.096 [500/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:52.361 [501/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:52.361 [502/723] Linking target lib/librte_pci.so.24.2 00:01:52.361 [503/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:52.361 [504/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:52.361 [505/723] Linking target lib/librte_timer.so.24.2 00:01:52.361 [506/723] Linking target lib/librte_acl.so.24.2 00:01:52.361 [507/723] Linking target lib/librte_cfgfile.so.24.2 00:01:52.361 [508/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:52.361 [509/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:52.361 [510/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:52.361 [511/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:52.361 
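Consistent with the enable_drivers list in the configuration summary (bus/pci, bus/vdev, mempool/ring and net/i40e plus its base code), the only PMDs generated in this build are rte_bus_pci, rte_bus_vdev, rte_mempool_ring and, later, rte_net_i40e. As a hypothetical spot check after the build, the matching versioned shared objects should be the only ones under drivers/ in the build tree; the layout is assumed from the "Linking target drivers/..." entries in this log.

    # Not part of the CI job; expected: librte_bus_pci, librte_bus_vdev,
    # librte_mempool_ring and librte_net_i40e, all versioned .so.24.2.
    ls /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/drivers/librte_*.so.24.2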
[512/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:52.361 [513/723] Linking target lib/librte_dmadev.so.24.2 00:01:52.361 [514/723] Linking target lib/librte_rawdev.so.24.2 00:01:52.361 [515/723] Linking target lib/librte_rcu.so.24.2 00:01:52.361 [516/723] Linking target lib/librte_stack.so.24.2 00:01:52.621 [517/723] Linking target lib/librte_jobstats.so.24.2 00:01:52.621 [518/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:52.621 [519/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:52.621 [520/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:52.621 [521/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:52.622 [522/723] Linking target lib/librte_mempool.so.24.2 00:01:52.622 [523/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:52.622 [524/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:52.622 [525/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:52.622 [526/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:52.622 [527/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:52.622 [528/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:52.622 [529/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:52.622 [530/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:52.622 [531/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:52.883 [532/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:52.883 [533/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:52.883 [534/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:52.883 [535/723] Linking target lib/librte_mbuf.so.24.2 00:01:52.883 [536/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:52.883 [537/723] Linking target lib/librte_rib.so.24.2 00:01:52.883 [538/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:52.883 [539/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:53.145 [540/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:53.145 [541/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:53.145 [542/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:53.145 [543/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:53.145 [544/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:53.145 [545/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:53.145 [546/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:53.145 [547/723] Linking target lib/librte_net.so.24.2 00:01:53.145 [548/723] Linking target lib/librte_compressdev.so.24.2 00:01:53.145 [549/723] Linking target lib/librte_bbdev.so.24.2 00:01:53.145 [550/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:53.145 [551/723] Linking target lib/librte_cryptodev.so.24.2 00:01:53.145 [552/723] 
Linking target lib/librte_distributor.so.24.2 00:01:53.145 [553/723] Linking target lib/librte_gpudev.so.24.2 00:01:53.145 [554/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:53.145 [555/723] Linking target lib/librte_regexdev.so.24.2 00:01:53.407 [556/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:53.407 [557/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:53.408 [558/723] Linking target lib/librte_mldev.so.24.2 00:01:53.408 [559/723] Linking target lib/librte_reorder.so.24.2 00:01:53.408 [560/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:53.408 [561/723] Linking target lib/librte_sched.so.24.2 00:01:53.408 [562/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:53.408 [563/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:53.408 [564/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:53.408 [565/723] Linking target lib/librte_fib.so.24.2 00:01:53.408 [566/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:53.408 [567/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:53.408 [568/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:53.408 [569/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:53.408 [570/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:53.408 [571/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:53.408 [572/723] Linking target lib/librte_cmdline.so.24.2 00:01:53.408 [573/723] Linking target lib/librte_hash.so.24.2 00:01:53.408 [574/723] Linking target lib/librte_security.so.24.2 00:01:53.408 [575/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:53.408 [576/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:53.670 [577/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:53.670 [578/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:53.670 [579/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:53.670 [580/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:53.670 [581/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:53.670 [582/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:53.670 [583/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:53.670 [584/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:53.670 [585/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:53.670 [586/723] Linking target lib/librte_lpm.so.24.2 00:01:53.670 [587/723] Linking target lib/librte_pdcp.so.24.2 00:01:53.930 [588/723] Linking target lib/librte_efd.so.24.2 00:01:53.930 [589/723] Linking target lib/librte_member.so.24.2 00:01:53.930 [590/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:53.930 [591/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 
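Alongside the remaining libraries, ninja is now compiling the dpdk-* test and example applications (dpdk-test-eventdev, dpdk-test-crypto-perf, dpdk-test-compress-perf, dpdk-testpmd and so on); their binaries are linked near the end of the build and land under app/ in the build tree. A hypothetical way to list what was produced, with the path assumed from the "Linking target app/..." entries that follow:

    # Not part of the CI job; lists the application binaries ninja linked.
    ls /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/app/dpdk-*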
00:01:53.930 [592/723] Linking target lib/librte_ipsec.so.24.2 00:01:53.930 [593/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:53.930 [594/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:54.192 [595/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:54.192 [596/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:54.192 [597/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:54.192 [598/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:54.192 [599/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:54.192 [600/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:54.451 [601/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:54.451 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:54.451 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:54.451 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:54.451 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:54.451 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:54.451 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:54.711 [608/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:54.711 [609/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:54.711 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:54.711 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:54.973 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:54.973 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:54.973 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:54.973 [615/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:54.973 [616/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:54.973 [617/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:54.973 [618/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:55.232 [619/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:55.232 [620/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:55.232 [621/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:55.232 [622/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:55.232 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:55.490 [624/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:55.490 [625/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:55.490 [626/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:55.748 [627/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:55.748 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:55.748 [629/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:55.748 [630/723] 
Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:55.748 [631/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:55.748 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:55.748 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:55.748 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:55.748 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:55.748 [636/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:55.748 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:56.006 [638/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:56.006 [639/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.006 [640/723] Linking target lib/librte_ethdev.so.24.2 00:01:56.006 [641/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:56.006 [642/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:56.006 [643/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:56.264 [644/723] Linking target lib/librte_metrics.so.24.2 00:01:56.264 [645/723] Linking target lib/librte_gro.so.24.2 00:01:56.264 [646/723] Linking target lib/librte_pcapng.so.24.2 00:01:56.264 [647/723] Linking target lib/librte_gso.so.24.2 00:01:56.264 [648/723] Linking target lib/librte_ip_frag.so.24.2 00:01:56.264 [649/723] Linking target lib/librte_power.so.24.2 00:01:56.264 [650/723] Linking target lib/librte_bpf.so.24.2 00:01:56.264 [651/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:56.264 [652/723] Linking target lib/librte_eventdev.so.24.2 00:01:56.264 [653/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:56.264 [654/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:56.264 [655/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:56.264 [656/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:56.264 [657/723] Linking target lib/librte_latencystats.so.24.2 00:01:56.264 [658/723] Linking target lib/librte_bitratestats.so.24.2 00:01:56.264 [659/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:56.264 [660/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:56.522 [661/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:56.522 [662/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:56.522 [663/723] Linking target lib/librte_dispatcher.so.24.2 00:01:56.522 [664/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:56.522 [665/723] Linking target lib/librte_port.so.24.2 00:01:56.522 [666/723] Linking target lib/librte_pdump.so.24.2 00:01:56.522 [667/723] Linking target lib/librte_graph.so.24.2 00:01:56.522 [668/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:56.522 [669/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:56.522 [670/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:56.522 [671/723] Linking target lib/librte_table.so.24.2 00:01:56.522 [672/723] Linking target lib/librte_node.so.24.2 00:01:56.522 [673/723] 
Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:56.780 [674/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:56.780 [675/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:56.780 [676/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:56.780 [677/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:57.038 [678/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:57.296 [679/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:57.296 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:57.554 [681/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:57.812 [682/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:57.812 [683/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:57.812 [684/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:58.070 [685/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:58.070 [686/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:58.071 [687/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.071 [688/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.329 [689/723] Linking static target drivers/librte_net_i40e.a 00:01:58.587 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:58.844 [691/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.844 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:59.102 [693/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:59.361 [694/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:59.928 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:08.075 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:08.075 [697/723] Linking static target lib/librte_pipeline.a 00:02:08.075 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:08.334 [699/723] Linking static target lib/librte_vhost.a 00:02:08.901 [700/723] Linking target app/dpdk-test-acl 00:02:08.901 [701/723] Linking target app/dpdk-test-cmdline 00:02:08.901 [702/723] Linking target app/dpdk-dumpcap 00:02:08.901 [703/723] Linking target app/dpdk-test-flow-perf 00:02:08.901 [704/723] Linking target app/dpdk-test-sad 00:02:08.901 [705/723] Linking target app/dpdk-test-fib 00:02:08.901 [706/723] Linking target app/dpdk-proc-info 00:02:08.901 [707/723] Linking target app/dpdk-test-compress-perf 00:02:08.901 [708/723] Linking target app/dpdk-test-mldev 00:02:08.901 [709/723] Linking target app/dpdk-graph 00:02:08.901 [710/723] Linking target app/dpdk-pdump 00:02:08.901 [711/723] Linking target app/dpdk-test-regex 00:02:08.902 [712/723] Linking target app/dpdk-test-pipeline 00:02:08.902 [713/723] Linking target app/dpdk-test-eventdev 00:02:08.902 [714/723] Linking target app/dpdk-test-gpudev 00:02:08.902 [715/723] Linking target app/dpdk-test-bbdev 00:02:08.902 [716/723] Linking target app/dpdk-test-security-perf 00:02:08.902 [717/723] Linking target app/dpdk-test-crypto-perf 00:02:08.902 [718/723] Linking target app/dpdk-test-dma-perf 00:02:08.902 [719/723] Linking target 
app/dpdk-testpmd 00:02:09.160 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.419 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:10.799 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.799 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:10.799 13:10:08 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:10.799 13:10:08 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:10.799 13:10:08 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:10.799 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:10.799 [0/1] Installing files. 00:02:11.063 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:11.063 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
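The long run of "Installing ..." records above and below is produced by the install step visible earlier in the log: autobuild_common.sh checks uname -s against FreeBSD and then runs ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install, at which point Meson copies the telemetry endpoint scripts and every example source tree under dpdk/build/share/dpdk/. The following is a minimal sketch of reproducing that flow outside the Jenkins job; the workspace path and the -j48 job count are taken from the log, while the meson setup call and its --prefix value are assumptions about how build-tmp was configured, not something this log states.

  # Hedged sketch, assuming build-tmp was configured with a prefix of dpdk/build
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
  meson setup build-tmp --prefix="$PWD/build"    # assumed configure step, not shown in this log
  ninja -C build-tmp -j48                        # compiles the [N/723] targets logged above
  if [[ "$(uname -s)" != "FreeBSD" ]]; then      # same OS gate as autobuild_common.sh@188
      ninja -C build-tmp -j48 install            # emits the "Installing ..." records
  fi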
00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.064 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.065 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.065 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.065 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.066 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.066 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.067 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:11.068 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.329 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_rcu.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.329 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_compressdev.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.330 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_power.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:11.593 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:11.593 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:11.593 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.593 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:11.593 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 
Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:11.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:11.597 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:11.597 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:11.597 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:11.597 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:11.597 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:11.597 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:11.597 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:11.597 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:11.597 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:11.597 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:11.597 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:11.598 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:11.598 Installing symlink pointing 
to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:11.598 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:11.598 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:11.598 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:11.598 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:11.598 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:11.598 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:11.598 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:11.598 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:11.598 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:11.598 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:11.598 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:11.598 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:11.598 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:11.598 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:11.598 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:11.598 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:11.598 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:11.598 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:11.598 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:11.598 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:11.598 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:11.598 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:11.598 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:11.598 Installing symlink pointing to librte_bbdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:11.598 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:11.598 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:11.598 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:11.598 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:11.598 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:11.598 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:11.598 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:11.598 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:11.598 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:11.598 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:11.598 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:11.598 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:11.598 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:11.598 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:11.598 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:11.598 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:11.598 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:11.598 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:11.598 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:11.598 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:11.598 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:11.598 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:11.598 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:11.598 Installing 
symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:11.598 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:11.598 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:11.598 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:11.598 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:11.598 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:11.598 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:11.598 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:11.598 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:11.598 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:11.598 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:11.598 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:11.598 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:11.598 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:11.598 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:11.598 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:11.598 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:11.598 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:11.598 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:11.598 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:11.598 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:11.598 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:11.598 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:11.598 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:11.598 Installing symlink pointing to librte_rib.so.24.2 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:11.598 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:11.598 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:11.598 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:11.598 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:11.598 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:11.598 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:11.598 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:11.598 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:11.598 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:11.598 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:11.598 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:11.598 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:11.598 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:11.598 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:11.598 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:11.598 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:11.598 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:11.598 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:11.598 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:11.598 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:11.598 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:11.598 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:11.598 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:11.598 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 
00:02:11.598 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:11.598 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:11.599 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:11.599 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:11.599 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:11.599 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:11.599 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:11.599 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:11.599 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:11.599 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:11.599 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:11.599 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:11.599 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:11.599 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:11.599 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:11.599 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:11.599 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:11.599 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:11.599 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:11.599 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:11.599 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:11.599 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:11.599 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:11.599 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:11.599 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:11.599 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:11.599 13:10:09 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:11.599 13:10:09 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.599 00:02:11.599 real 0m40.452s 00:02:11.599 user 13m58.632s 00:02:11.599 sys 2m0.688s 00:02:11.599 13:10:09 
build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:11.599 13:10:09 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:11.599 ************************************ 00:02:11.599 END TEST build_native_dpdk 00:02:11.599 ************************************ 00:02:11.599 13:10:09 -- common/autotest_common.sh@1142 -- $ return 0 00:02:11.599 13:10:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.599 13:10:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.599 13:10:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.599 13:10:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.599 13:10:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.599 13:10:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.599 13:10:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.599 13:10:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:11.857 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:11.857 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.857 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.857 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:12.114 Using 'verbs' RDMA provider 00:02:22.681 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:32.687 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:32.687 Creating mk/config.mk...done. 00:02:32.687 Creating mk/cc.flags.mk...done. 00:02:32.687 Type 'make' to build. 00:02:32.687 13:10:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:32.687 13:10:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:32.687 13:10:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:32.687 13:10:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.687 ************************************ 00:02:32.687 START TEST make 00:02:32.687 ************************************ 00:02:32.687 13:10:28 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:32.687 make[1]: Nothing to be done for 'all'. 
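For context on the configure step above: --with-dpdk points at the DPDK tree staged under /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build, and the "Using .../dpdk/build/lib/pkgconfig for additional libs" line is SPDK picking up the pkg-config files (libdpdk.pc, libdpdk-libs.pc) installed a few steps earlier. A minimal sketch of inspecting that same staging area by hand, assuming the workspace layout shown in this log (the DPDK_BUILD variable is only illustrative, not part of the job):
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig"
    pkg-config --modversion libdpdk          # version of the freshly built DPDK checkout
    pkg-config --cflags --libs libdpdk       # include/link flags configure derives from libdpdk.pc
    ls -l "$DPDK_BUILD"/lib/librte_eal.so*   # shows the librte_eal.so -> .so.24 -> .so.24.2 symlink chain installed above
Driver PMDs such as librte_bus_pci and librte_net_i40e end up under $DPDK_BUILD/lib/dpdk/pmds-24.2, which is what the symlink-drivers-solibs.sh step above wires up.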
00:02:33.643 The Meson build system 00:02:33.643 Version: 1.3.1 00:02:33.643 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:33.643 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:33.643 Build type: native build 00:02:33.643 Project name: libvfio-user 00:02:33.643 Project version: 0.0.1 00:02:33.643 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:33.643 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:33.643 Host machine cpu family: x86_64 00:02:33.643 Host machine cpu: x86_64 00:02:33.643 Run-time dependency threads found: YES 00:02:33.643 Library dl found: YES 00:02:33.643 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:33.643 Run-time dependency json-c found: YES 0.17 00:02:33.643 Run-time dependency cmocka found: YES 1.1.7 00:02:33.643 Program pytest-3 found: NO 00:02:33.643 Program flake8 found: NO 00:02:33.643 Program misspell-fixer found: NO 00:02:33.643 Program restructuredtext-lint found: NO 00:02:33.643 Program valgrind found: YES (/usr/bin/valgrind) 00:02:33.643 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:33.643 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:33.643 Compiler for C supports arguments -Wwrite-strings: YES 00:02:33.643 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:33.643 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:33.643 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:33.643 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:33.643 Build targets in project: 8 00:02:33.643 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:33.643 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:33.643 00:02:33.643 libvfio-user 0.0.1 00:02:33.643 00:02:33.643 User defined options 00:02:33.643 buildtype : debug 00:02:33.643 default_library: shared 00:02:33.643 libdir : /usr/local/lib 00:02:33.643 00:02:33.643 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.222 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:34.484 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:34.484 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:34.484 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:34.485 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:34.485 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:34.485 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:34.485 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:34.485 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:34.485 [9/37] Compiling C object samples/null.p/null.c.o 00:02:34.485 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:34.485 [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:34.485 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:34.485 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:34.485 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:34.485 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:34.485 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:34.485 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:34.485 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:34.485 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:34.485 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:34.745 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:34.745 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:34.746 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:34.746 [24/37] Compiling C object samples/server.p/server.c.o 00:02:34.746 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:34.746 [26/37] Compiling C object samples/client.p/client.c.o 00:02:34.746 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:34.746 [28/37] Linking target samples/client 00:02:34.746 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:34.746 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:34.746 [31/37] Linking target test/unit_tests 00:02:35.007 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:35.007 [33/37] Linking target samples/null 00:02:35.007 [34/37] Linking target samples/gpio-pci-idio-16 00:02:35.007 [35/37] Linking target samples/lspci 00:02:35.007 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:35.007 [37/37] Linking target samples/server 00:02:35.007 INFO: autodetecting backend as ninja 00:02:35.007 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
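The block above is Meson configuring and Ninja compiling the bundled libvfio-user subproject inside SPDK's make. A rough standalone equivalent of those steps, assuming the same source and build directories and the user-defined options the summary reports (buildtype debug, default_library shared, libdir /usr/local/lib); SRC and BUILD are illustrative names, and in the job itself SPDK's build scripts drive this, followed by the DESTDIR meson install shown next:
    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    ninja -C "$BUILD"    # runs the [1/37]..[37/37] compile/link steps listed above
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"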
00:02:35.270 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:35.839 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:35.839 ninja: no work to do. 00:02:48.031 CC lib/ut_mock/mock.o 00:02:48.031 CC lib/log/log.o 00:02:48.032 CC lib/log/log_flags.o 00:02:48.032 CC lib/log/log_deprecated.o 00:02:48.032 CC lib/ut/ut.o 00:02:48.032 LIB libspdk_ut.a 00:02:48.032 LIB libspdk_log.a 00:02:48.032 LIB libspdk_ut_mock.a 00:02:48.032 SO libspdk_ut.so.2.0 00:02:48.032 SO libspdk_ut_mock.so.6.0 00:02:48.032 SO libspdk_log.so.7.0 00:02:48.032 SYMLINK libspdk_ut.so 00:02:48.032 SYMLINK libspdk_ut_mock.so 00:02:48.032 SYMLINK libspdk_log.so 00:02:48.032 CC lib/dma/dma.o 00:02:48.032 CXX lib/trace_parser/trace.o 00:02:48.032 CC lib/ioat/ioat.o 00:02:48.032 CC lib/util/base64.o 00:02:48.032 CC lib/util/bit_array.o 00:02:48.032 CC lib/util/cpuset.o 00:02:48.032 CC lib/util/crc16.o 00:02:48.032 CC lib/util/crc32.o 00:02:48.032 CC lib/util/crc32c.o 00:02:48.032 CC lib/util/crc32_ieee.o 00:02:48.032 CC lib/util/crc64.o 00:02:48.032 CC lib/util/dif.o 00:02:48.032 CC lib/util/fd.o 00:02:48.032 CC lib/util/file.o 00:02:48.032 CC lib/util/hexlify.o 00:02:48.032 CC lib/util/iov.o 00:02:48.032 CC lib/util/math.o 00:02:48.032 CC lib/util/pipe.o 00:02:48.032 CC lib/util/strerror_tls.o 00:02:48.032 CC lib/util/string.o 00:02:48.032 CC lib/util/uuid.o 00:02:48.032 CC lib/util/fd_group.o 00:02:48.032 CC lib/util/xor.o 00:02:48.032 CC lib/util/zipf.o 00:02:48.032 CC lib/vfio_user/host/vfio_user_pci.o 00:02:48.032 CC lib/vfio_user/host/vfio_user.o 00:02:48.032 LIB libspdk_dma.a 00:02:48.032 SO libspdk_dma.so.4.0 00:02:48.032 SYMLINK libspdk_dma.so 00:02:48.032 LIB libspdk_ioat.a 00:02:48.032 SO libspdk_ioat.so.7.0 00:02:48.032 SYMLINK libspdk_ioat.so 00:02:48.032 LIB libspdk_vfio_user.a 00:02:48.289 SO libspdk_vfio_user.so.5.0 00:02:48.289 SYMLINK libspdk_vfio_user.so 00:02:48.289 LIB libspdk_util.a 00:02:48.289 SO libspdk_util.so.9.1 00:02:48.546 SYMLINK libspdk_util.so 00:02:48.804 CC lib/rdma_utils/rdma_utils.o 00:02:48.804 CC lib/env_dpdk/env.o 00:02:48.804 CC lib/idxd/idxd.o 00:02:48.804 CC lib/vmd/vmd.o 00:02:48.804 CC lib/idxd/idxd_user.o 00:02:48.804 CC lib/env_dpdk/memory.o 00:02:48.804 CC lib/vmd/led.o 00:02:48.804 CC lib/rdma_provider/common.o 00:02:48.804 CC lib/json/json_parse.o 00:02:48.804 CC lib/env_dpdk/pci.o 00:02:48.804 CC lib/conf/conf.o 00:02:48.804 CC lib/idxd/idxd_kernel.o 00:02:48.804 CC lib/env_dpdk/init.o 00:02:48.804 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:48.804 CC lib/json/json_util.o 00:02:48.804 CC lib/env_dpdk/threads.o 00:02:48.804 CC lib/env_dpdk/pci_ioat.o 00:02:48.804 CC lib/json/json_write.o 00:02:48.804 CC lib/env_dpdk/pci_virtio.o 00:02:48.804 CC lib/env_dpdk/pci_vmd.o 00:02:48.804 CC lib/env_dpdk/pci_idxd.o 00:02:48.804 CC lib/env_dpdk/pci_event.o 00:02:48.804 CC lib/env_dpdk/sigbus_handler.o 00:02:48.804 CC lib/env_dpdk/pci_dpdk.o 00:02:48.804 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:48.804 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:48.804 LIB libspdk_trace_parser.a 00:02:48.804 SO libspdk_trace_parser.so.5.0 00:02:48.804 SYMLINK libspdk_trace_parser.so 00:02:48.804 LIB libspdk_rdma_provider.a 00:02:49.062 SO libspdk_rdma_provider.so.6.0 00:02:49.062 SYMLINK libspdk_rdma_provider.so 00:02:49.062 LIB libspdk_rdma_utils.a 00:02:49.062 LIB libspdk_json.a 00:02:49.062 SO 
libspdk_rdma_utils.so.1.0 00:02:49.062 LIB libspdk_conf.a 00:02:49.062 SO libspdk_json.so.6.0 00:02:49.062 SO libspdk_conf.so.6.0 00:02:49.062 SYMLINK libspdk_rdma_utils.so 00:02:49.062 SYMLINK libspdk_conf.so 00:02:49.062 SYMLINK libspdk_json.so 00:02:49.320 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.320 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.320 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.320 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.320 LIB libspdk_idxd.a 00:02:49.320 LIB libspdk_vmd.a 00:02:49.320 SO libspdk_idxd.so.12.0 00:02:49.320 SO libspdk_vmd.so.6.0 00:02:49.320 SYMLINK libspdk_idxd.so 00:02:49.578 SYMLINK libspdk_vmd.so 00:02:49.578 LIB libspdk_jsonrpc.a 00:02:49.578 SO libspdk_jsonrpc.so.6.0 00:02:49.578 SYMLINK libspdk_jsonrpc.so 00:02:49.836 CC lib/rpc/rpc.o 00:02:50.094 LIB libspdk_rpc.a 00:02:50.094 SO libspdk_rpc.so.6.0 00:02:50.094 SYMLINK libspdk_rpc.so 00:02:50.094 LIB libspdk_env_dpdk.a 00:02:50.352 SO libspdk_env_dpdk.so.14.1 00:02:50.352 CC lib/trace/trace.o 00:02:50.352 CC lib/trace/trace_flags.o 00:02:50.352 CC lib/keyring/keyring.o 00:02:50.352 CC lib/notify/notify.o 00:02:50.352 CC lib/trace/trace_rpc.o 00:02:50.352 CC lib/notify/notify_rpc.o 00:02:50.352 CC lib/keyring/keyring_rpc.o 00:02:50.352 SYMLINK libspdk_env_dpdk.so 00:02:50.352 LIB libspdk_notify.a 00:02:50.352 SO libspdk_notify.so.6.0 00:02:50.610 LIB libspdk_keyring.a 00:02:50.610 SYMLINK libspdk_notify.so 00:02:50.610 LIB libspdk_trace.a 00:02:50.610 SO libspdk_keyring.so.1.0 00:02:50.610 SO libspdk_trace.so.10.0 00:02:50.610 SYMLINK libspdk_keyring.so 00:02:50.610 SYMLINK libspdk_trace.so 00:02:50.868 CC lib/sock/sock.o 00:02:50.868 CC lib/sock/sock_rpc.o 00:02:50.868 CC lib/thread/thread.o 00:02:50.868 CC lib/thread/iobuf.o 00:02:51.127 LIB libspdk_sock.a 00:02:51.127 SO libspdk_sock.so.10.0 00:02:51.127 SYMLINK libspdk_sock.so 00:02:51.385 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:51.385 CC lib/nvme/nvme_ctrlr.o 00:02:51.385 CC lib/nvme/nvme_fabric.o 00:02:51.385 CC lib/nvme/nvme_ns_cmd.o 00:02:51.385 CC lib/nvme/nvme_ns.o 00:02:51.385 CC lib/nvme/nvme_pcie_common.o 00:02:51.385 CC lib/nvme/nvme_pcie.o 00:02:51.385 CC lib/nvme/nvme_qpair.o 00:02:51.385 CC lib/nvme/nvme.o 00:02:51.385 CC lib/nvme/nvme_quirks.o 00:02:51.385 CC lib/nvme/nvme_transport.o 00:02:51.385 CC lib/nvme/nvme_discovery.o 00:02:51.385 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:51.385 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:51.385 CC lib/nvme/nvme_tcp.o 00:02:51.385 CC lib/nvme/nvme_opal.o 00:02:51.385 CC lib/nvme/nvme_io_msg.o 00:02:51.385 CC lib/nvme/nvme_poll_group.o 00:02:51.385 CC lib/nvme/nvme_zns.o 00:02:51.385 CC lib/nvme/nvme_stubs.o 00:02:51.385 CC lib/nvme/nvme_auth.o 00:02:51.385 CC lib/nvme/nvme_cuse.o 00:02:51.385 CC lib/nvme/nvme_vfio_user.o 00:02:51.385 CC lib/nvme/nvme_rdma.o 00:02:52.320 LIB libspdk_thread.a 00:02:52.320 SO libspdk_thread.so.10.1 00:02:52.578 SYMLINK libspdk_thread.so 00:02:52.578 CC lib/vfu_tgt/tgt_endpoint.o 00:02:52.578 CC lib/blob/blobstore.o 00:02:52.578 CC lib/init/json_config.o 00:02:52.578 CC lib/init/subsystem.o 00:02:52.578 CC lib/blob/request.o 00:02:52.578 CC lib/vfu_tgt/tgt_rpc.o 00:02:52.578 CC lib/init/subsystem_rpc.o 00:02:52.578 CC lib/accel/accel.o 00:02:52.578 CC lib/blob/zeroes.o 00:02:52.578 CC lib/accel/accel_rpc.o 00:02:52.578 CC lib/init/rpc.o 00:02:52.578 CC lib/virtio/virtio.o 00:02:52.578 CC lib/blob/blob_bs_dev.o 00:02:52.578 CC lib/accel/accel_sw.o 00:02:52.578 CC lib/virtio/virtio_vhost_user.o 00:02:52.578 CC lib/virtio/virtio_vfio_user.o 00:02:52.578 CC 
lib/virtio/virtio_pci.o 00:02:52.836 LIB libspdk_init.a 00:02:52.836 SO libspdk_init.so.5.0 00:02:53.094 LIB libspdk_virtio.a 00:02:53.094 LIB libspdk_vfu_tgt.a 00:02:53.094 SYMLINK libspdk_init.so 00:02:53.094 SO libspdk_virtio.so.7.0 00:02:53.094 SO libspdk_vfu_tgt.so.3.0 00:02:53.094 SYMLINK libspdk_vfu_tgt.so 00:02:53.094 SYMLINK libspdk_virtio.so 00:02:53.094 CC lib/event/app.o 00:02:53.094 CC lib/event/reactor.o 00:02:53.094 CC lib/event/log_rpc.o 00:02:53.094 CC lib/event/app_rpc.o 00:02:53.094 CC lib/event/scheduler_static.o 00:02:53.661 LIB libspdk_event.a 00:02:53.661 SO libspdk_event.so.14.0 00:02:53.661 LIB libspdk_accel.a 00:02:53.661 SYMLINK libspdk_event.so 00:02:53.661 SO libspdk_accel.so.15.1 00:02:53.661 SYMLINK libspdk_accel.so 00:02:53.920 LIB libspdk_nvme.a 00:02:53.920 CC lib/bdev/bdev.o 00:02:53.920 CC lib/bdev/bdev_rpc.o 00:02:53.920 CC lib/bdev/bdev_zone.o 00:02:53.920 CC lib/bdev/part.o 00:02:53.920 CC lib/bdev/scsi_nvme.o 00:02:53.920 SO libspdk_nvme.so.13.1 00:02:54.178 SYMLINK libspdk_nvme.so 00:02:55.590 LIB libspdk_blob.a 00:02:55.590 SO libspdk_blob.so.11.0 00:02:55.590 SYMLINK libspdk_blob.so 00:02:55.847 CC lib/lvol/lvol.o 00:02:55.847 CC lib/blobfs/blobfs.o 00:02:55.847 CC lib/blobfs/tree.o 00:02:56.410 LIB libspdk_bdev.a 00:02:56.410 SO libspdk_bdev.so.15.1 00:02:56.681 SYMLINK libspdk_bdev.so 00:02:56.681 LIB libspdk_blobfs.a 00:02:56.681 SO libspdk_blobfs.so.10.0 00:02:56.681 SYMLINK libspdk_blobfs.so 00:02:56.681 LIB libspdk_lvol.a 00:02:56.681 SO libspdk_lvol.so.10.0 00:02:56.681 CC lib/ublk/ublk.o 00:02:56.681 CC lib/scsi/dev.o 00:02:56.681 CC lib/nbd/nbd.o 00:02:56.681 CC lib/nvmf/ctrlr.o 00:02:56.681 CC lib/nbd/nbd_rpc.o 00:02:56.681 CC lib/ublk/ublk_rpc.o 00:02:56.681 CC lib/nvmf/ctrlr_discovery.o 00:02:56.681 CC lib/scsi/lun.o 00:02:56.681 CC lib/ftl/ftl_core.o 00:02:56.681 CC lib/ftl/ftl_init.o 00:02:56.681 CC lib/nvmf/ctrlr_bdev.o 00:02:56.681 CC lib/scsi/port.o 00:02:56.681 CC lib/nvmf/subsystem.o 00:02:56.681 CC lib/scsi/scsi.o 00:02:56.681 CC lib/ftl/ftl_layout.o 00:02:56.681 CC lib/ftl/ftl_debug.o 00:02:56.681 CC lib/scsi/scsi_bdev.o 00:02:56.681 CC lib/nvmf/nvmf.o 00:02:56.681 CC lib/scsi/scsi_pr.o 00:02:56.681 CC lib/nvmf/nvmf_rpc.o 00:02:56.681 CC lib/ftl/ftl_io.o 00:02:56.681 CC lib/ftl/ftl_sb.o 00:02:56.681 CC lib/scsi/scsi_rpc.o 00:02:56.681 CC lib/nvmf/transport.o 00:02:56.681 CC lib/nvmf/tcp.o 00:02:56.681 CC lib/scsi/task.o 00:02:56.681 CC lib/ftl/ftl_l2p.o 00:02:56.681 CC lib/ftl/ftl_l2p_flat.o 00:02:56.681 CC lib/nvmf/stubs.o 00:02:56.681 CC lib/ftl/ftl_nv_cache.o 00:02:56.681 CC lib/ftl/ftl_band.o 00:02:56.681 CC lib/nvmf/mdns_server.o 00:02:56.681 CC lib/nvmf/vfio_user.o 00:02:56.681 CC lib/ftl/ftl_band_ops.o 00:02:56.681 CC lib/nvmf/rdma.o 00:02:56.681 CC lib/ftl/ftl_writer.o 00:02:56.681 CC lib/nvmf/auth.o 00:02:56.681 CC lib/ftl/ftl_rq.o 00:02:56.681 CC lib/ftl/ftl_reloc.o 00:02:56.681 CC lib/ftl/ftl_l2p_cache.o 00:02:56.681 CC lib/ftl/ftl_p2l.o 00:02:56.681 CC lib/ftl/mngt/ftl_mngt.o 00:02:56.681 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:56.681 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:56.681 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:56.681 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:56.681 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:56.939 SYMLINK libspdk_lvol.so 00:02:56.939 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:57.202 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:57.202 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:57.202 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:57.202 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:57.202 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:57.202 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:57.202 CC lib/ftl/utils/ftl_conf.o 00:02:57.202 CC lib/ftl/utils/ftl_md.o 00:02:57.202 CC lib/ftl/utils/ftl_mempool.o 00:02:57.202 CC lib/ftl/utils/ftl_bitmap.o 00:02:57.202 CC lib/ftl/utils/ftl_property.o 00:02:57.202 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:57.202 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:57.202 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:57.202 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:57.202 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:57.202 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:57.202 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:57.202 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:57.462 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:57.462 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:57.462 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:57.462 CC lib/ftl/base/ftl_base_dev.o 00:02:57.462 CC lib/ftl/base/ftl_base_bdev.o 00:02:57.462 CC lib/ftl/ftl_trace.o 00:02:57.462 LIB libspdk_nbd.a 00:02:57.720 SO libspdk_nbd.so.7.0 00:02:57.720 SYMLINK libspdk_nbd.so 00:02:57.720 LIB libspdk_scsi.a 00:02:57.720 SO libspdk_scsi.so.9.0 00:02:57.720 LIB libspdk_ublk.a 00:02:57.979 SYMLINK libspdk_scsi.so 00:02:57.979 SO libspdk_ublk.so.3.0 00:02:57.979 SYMLINK libspdk_ublk.so 00:02:57.979 CC lib/vhost/vhost.o 00:02:57.979 CC lib/iscsi/conn.o 00:02:57.979 CC lib/vhost/vhost_rpc.o 00:02:57.979 CC lib/vhost/vhost_scsi.o 00:02:57.979 CC lib/iscsi/init_grp.o 00:02:57.979 CC lib/vhost/vhost_blk.o 00:02:57.979 CC lib/iscsi/iscsi.o 00:02:57.979 CC lib/iscsi/md5.o 00:02:57.979 CC lib/vhost/rte_vhost_user.o 00:02:57.979 CC lib/iscsi/param.o 00:02:57.979 CC lib/iscsi/portal_grp.o 00:02:57.979 CC lib/iscsi/tgt_node.o 00:02:57.979 CC lib/iscsi/iscsi_subsystem.o 00:02:57.979 CC lib/iscsi/iscsi_rpc.o 00:02:57.979 CC lib/iscsi/task.o 00:02:58.237 LIB libspdk_ftl.a 00:02:58.494 SO libspdk_ftl.so.9.0 00:02:58.752 SYMLINK libspdk_ftl.so 00:02:59.318 LIB libspdk_vhost.a 00:02:59.318 SO libspdk_vhost.so.8.0 00:02:59.318 SYMLINK libspdk_vhost.so 00:02:59.318 LIB libspdk_nvmf.a 00:02:59.577 LIB libspdk_iscsi.a 00:02:59.577 SO libspdk_nvmf.so.18.1 00:02:59.577 SO libspdk_iscsi.so.8.0 00:02:59.577 SYMLINK libspdk_nvmf.so 00:02:59.577 SYMLINK libspdk_iscsi.so 00:02:59.836 CC module/vfu_device/vfu_virtio.o 00:02:59.836 CC module/env_dpdk/env_dpdk_rpc.o 00:02:59.836 CC module/vfu_device/vfu_virtio_blk.o 00:02:59.836 CC module/vfu_device/vfu_virtio_scsi.o 00:02:59.836 CC module/vfu_device/vfu_virtio_rpc.o 00:03:00.094 CC module/accel/error/accel_error.o 00:03:00.094 CC module/accel/error/accel_error_rpc.o 00:03:00.094 CC module/blob/bdev/blob_bdev.o 00:03:00.094 CC module/scheduler/gscheduler/gscheduler.o 00:03:00.094 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:00.094 CC module/accel/ioat/accel_ioat.o 00:03:00.094 CC module/accel/ioat/accel_ioat_rpc.o 00:03:00.094 CC module/sock/posix/posix.o 00:03:00.094 CC module/keyring/linux/keyring.o 00:03:00.094 CC module/keyring/linux/keyring_rpc.o 00:03:00.094 CC module/accel/iaa/accel_iaa.o 00:03:00.094 CC module/accel/iaa/accel_iaa_rpc.o 00:03:00.094 CC module/accel/dsa/accel_dsa.o 00:03:00.094 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:00.094 CC module/accel/dsa/accel_dsa_rpc.o 00:03:00.094 CC module/keyring/file/keyring_rpc.o 00:03:00.094 CC module/keyring/file/keyring.o 00:03:00.094 LIB libspdk_env_dpdk_rpc.a 00:03:00.094 SO libspdk_env_dpdk_rpc.so.6.0 00:03:00.094 SYMLINK libspdk_env_dpdk_rpc.so 00:03:00.094 LIB libspdk_keyring_linux.a 00:03:00.094 LIB libspdk_scheduler_dpdk_governor.a 00:03:00.094 LIB 
libspdk_keyring_file.a 00:03:00.353 SO libspdk_keyring_linux.so.1.0 00:03:00.353 LIB libspdk_accel_error.a 00:03:00.353 LIB libspdk_scheduler_gscheduler.a 00:03:00.353 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:00.353 SO libspdk_keyring_file.so.1.0 00:03:00.353 LIB libspdk_accel_ioat.a 00:03:00.353 LIB libspdk_scheduler_dynamic.a 00:03:00.353 SO libspdk_scheduler_gscheduler.so.4.0 00:03:00.353 LIB libspdk_accel_iaa.a 00:03:00.353 SO libspdk_accel_error.so.2.0 00:03:00.353 SO libspdk_accel_ioat.so.6.0 00:03:00.353 SO libspdk_scheduler_dynamic.so.4.0 00:03:00.353 SYMLINK libspdk_keyring_linux.so 00:03:00.353 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:00.353 SO libspdk_accel_iaa.so.3.0 00:03:00.353 SYMLINK libspdk_keyring_file.so 00:03:00.353 SYMLINK libspdk_scheduler_gscheduler.so 00:03:00.353 SYMLINK libspdk_accel_error.so 00:03:00.353 LIB libspdk_accel_dsa.a 00:03:00.353 LIB libspdk_blob_bdev.a 00:03:00.353 SYMLINK libspdk_accel_ioat.so 00:03:00.353 SYMLINK libspdk_scheduler_dynamic.so 00:03:00.353 SYMLINK libspdk_accel_iaa.so 00:03:00.353 SO libspdk_blob_bdev.so.11.0 00:03:00.353 SO libspdk_accel_dsa.so.5.0 00:03:00.353 SYMLINK libspdk_blob_bdev.so 00:03:00.353 SYMLINK libspdk_accel_dsa.so 00:03:00.611 LIB libspdk_vfu_device.a 00:03:00.611 SO libspdk_vfu_device.so.3.0 00:03:00.611 CC module/bdev/null/bdev_null.o 00:03:00.611 CC module/bdev/raid/bdev_raid.o 00:03:00.611 CC module/bdev/gpt/vbdev_gpt.o 00:03:00.611 CC module/bdev/malloc/bdev_malloc.o 00:03:00.611 CC module/bdev/gpt/gpt.o 00:03:00.611 CC module/bdev/null/bdev_null_rpc.o 00:03:00.611 CC module/bdev/lvol/vbdev_lvol.o 00:03:00.611 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:00.611 CC module/bdev/delay/vbdev_delay.o 00:03:00.611 CC module/bdev/passthru/vbdev_passthru.o 00:03:00.611 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.611 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:00.611 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:00.611 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:00.611 CC module/bdev/ftl/bdev_ftl.o 00:03:00.611 CC module/blobfs/bdev/blobfs_bdev.o 00:03:00.611 CC module/bdev/nvme/bdev_nvme.o 00:03:00.611 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.611 CC module/bdev/error/vbdev_error.o 00:03:00.611 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:00.611 CC module/bdev/error/vbdev_error_rpc.o 00:03:00.611 CC module/bdev/iscsi/bdev_iscsi.o 00:03:00.611 CC module/bdev/split/vbdev_split.o 00:03:00.611 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.611 CC module/bdev/nvme/nvme_rpc.o 00:03:00.611 CC module/bdev/raid/raid0.o 00:03:00.611 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:00.611 CC module/bdev/nvme/bdev_mdns_client.o 00:03:00.611 CC module/bdev/raid/raid1.o 00:03:00.611 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.611 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:00.611 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:00.611 CC module/bdev/raid/concat.o 00:03:00.611 CC module/bdev/nvme/vbdev_opal.o 00:03:00.611 CC module/bdev/aio/bdev_aio.o 00:03:00.611 CC module/bdev/aio/bdev_aio_rpc.o 00:03:00.611 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:00.611 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:00.611 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:00.611 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:00.611 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:00.611 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:00.869 SYMLINK libspdk_vfu_device.so 00:03:00.869 LIB libspdk_sock_posix.a 00:03:00.869 SO libspdk_sock_posix.so.6.0 00:03:01.128 SYMLINK libspdk_sock_posix.so 00:03:01.128 LIB 
libspdk_blobfs_bdev.a 00:03:01.128 SO libspdk_blobfs_bdev.so.6.0 00:03:01.128 LIB libspdk_bdev_ftl.a 00:03:01.128 LIB libspdk_bdev_split.a 00:03:01.128 LIB libspdk_bdev_gpt.a 00:03:01.128 SO libspdk_bdev_ftl.so.6.0 00:03:01.128 SYMLINK libspdk_blobfs_bdev.so 00:03:01.128 SO libspdk_bdev_split.so.6.0 00:03:01.128 SO libspdk_bdev_gpt.so.6.0 00:03:01.128 LIB libspdk_bdev_null.a 00:03:01.128 LIB libspdk_bdev_error.a 00:03:01.128 LIB libspdk_bdev_passthru.a 00:03:01.128 SO libspdk_bdev_null.so.6.0 00:03:01.128 SYMLINK libspdk_bdev_ftl.so 00:03:01.128 SYMLINK libspdk_bdev_split.so 00:03:01.128 SYMLINK libspdk_bdev_gpt.so 00:03:01.128 SO libspdk_bdev_error.so.6.0 00:03:01.128 SO libspdk_bdev_passthru.so.6.0 00:03:01.128 LIB libspdk_bdev_malloc.a 00:03:01.128 LIB libspdk_bdev_zone_block.a 00:03:01.128 SYMLINK libspdk_bdev_null.so 00:03:01.386 LIB libspdk_bdev_iscsi.a 00:03:01.386 SO libspdk_bdev_malloc.so.6.0 00:03:01.386 SYMLINK libspdk_bdev_error.so 00:03:01.386 SYMLINK libspdk_bdev_passthru.so 00:03:01.386 SO libspdk_bdev_zone_block.so.6.0 00:03:01.386 SO libspdk_bdev_iscsi.so.6.0 00:03:01.386 LIB libspdk_bdev_delay.a 00:03:01.386 LIB libspdk_bdev_aio.a 00:03:01.386 SO libspdk_bdev_aio.so.6.0 00:03:01.386 SO libspdk_bdev_delay.so.6.0 00:03:01.386 SYMLINK libspdk_bdev_malloc.so 00:03:01.386 SYMLINK libspdk_bdev_zone_block.so 00:03:01.386 SYMLINK libspdk_bdev_iscsi.so 00:03:01.386 SYMLINK libspdk_bdev_delay.so 00:03:01.386 SYMLINK libspdk_bdev_aio.so 00:03:01.386 LIB libspdk_bdev_virtio.a 00:03:01.386 SO libspdk_bdev_virtio.so.6.0 00:03:01.386 LIB libspdk_bdev_lvol.a 00:03:01.386 SO libspdk_bdev_lvol.so.6.0 00:03:01.386 SYMLINK libspdk_bdev_virtio.so 00:03:01.645 SYMLINK libspdk_bdev_lvol.so 00:03:01.645 LIB libspdk_bdev_raid.a 00:03:01.645 SO libspdk_bdev_raid.so.6.0 00:03:01.902 SYMLINK libspdk_bdev_raid.so 00:03:03.277 LIB libspdk_bdev_nvme.a 00:03:03.277 SO libspdk_bdev_nvme.so.7.0 00:03:03.277 SYMLINK libspdk_bdev_nvme.so 00:03:03.535 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:03.535 CC module/event/subsystems/scheduler/scheduler.o 00:03:03.535 CC module/event/subsystems/vmd/vmd.o 00:03:03.535 CC module/event/subsystems/sock/sock.o 00:03:03.535 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:03.535 CC module/event/subsystems/iobuf/iobuf.o 00:03:03.535 CC module/event/subsystems/keyring/keyring.o 00:03:03.535 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:03.535 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:03.793 LIB libspdk_event_keyring.a 00:03:03.793 LIB libspdk_event_vhost_blk.a 00:03:03.793 LIB libspdk_event_vfu_tgt.a 00:03:03.793 LIB libspdk_event_sock.a 00:03:03.793 LIB libspdk_event_scheduler.a 00:03:03.793 LIB libspdk_event_vmd.a 00:03:03.793 SO libspdk_event_keyring.so.1.0 00:03:03.793 SO libspdk_event_vhost_blk.so.3.0 00:03:03.793 SO libspdk_event_vfu_tgt.so.3.0 00:03:03.793 SO libspdk_event_sock.so.5.0 00:03:03.793 LIB libspdk_event_iobuf.a 00:03:03.793 SO libspdk_event_scheduler.so.4.0 00:03:03.793 SO libspdk_event_vmd.so.6.0 00:03:03.793 SO libspdk_event_iobuf.so.3.0 00:03:03.793 SYMLINK libspdk_event_keyring.so 00:03:03.793 SYMLINK libspdk_event_vhost_blk.so 00:03:03.793 SYMLINK libspdk_event_vfu_tgt.so 00:03:03.793 SYMLINK libspdk_event_sock.so 00:03:03.793 SYMLINK libspdk_event_scheduler.so 00:03:03.793 SYMLINK libspdk_event_vmd.so 00:03:03.793 SYMLINK libspdk_event_iobuf.so 00:03:04.050 CC module/event/subsystems/accel/accel.o 00:03:04.050 LIB libspdk_event_accel.a 00:03:04.309 SO libspdk_event_accel.so.6.0 00:03:04.309 SYMLINK libspdk_event_accel.so 
00:03:04.309 CC module/event/subsystems/bdev/bdev.o 00:03:04.568 LIB libspdk_event_bdev.a 00:03:04.568 SO libspdk_event_bdev.so.6.0 00:03:04.568 SYMLINK libspdk_event_bdev.so 00:03:04.826 CC module/event/subsystems/ublk/ublk.o 00:03:04.826 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:04.826 CC module/event/subsystems/nbd/nbd.o 00:03:04.826 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:04.826 CC module/event/subsystems/scsi/scsi.o 00:03:05.085 LIB libspdk_event_nbd.a 00:03:05.085 LIB libspdk_event_ublk.a 00:03:05.085 LIB libspdk_event_scsi.a 00:03:05.085 SO libspdk_event_ublk.so.3.0 00:03:05.085 SO libspdk_event_nbd.so.6.0 00:03:05.085 SO libspdk_event_scsi.so.6.0 00:03:05.085 SYMLINK libspdk_event_nbd.so 00:03:05.085 SYMLINK libspdk_event_ublk.so 00:03:05.085 SYMLINK libspdk_event_scsi.so 00:03:05.085 LIB libspdk_event_nvmf.a 00:03:05.085 SO libspdk_event_nvmf.so.6.0 00:03:05.085 SYMLINK libspdk_event_nvmf.so 00:03:05.344 CC module/event/subsystems/iscsi/iscsi.o 00:03:05.344 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:05.344 LIB libspdk_event_vhost_scsi.a 00:03:05.344 LIB libspdk_event_iscsi.a 00:03:05.344 SO libspdk_event_vhost_scsi.so.3.0 00:03:05.344 SO libspdk_event_iscsi.so.6.0 00:03:05.344 SYMLINK libspdk_event_vhost_scsi.so 00:03:05.604 SYMLINK libspdk_event_iscsi.so 00:03:05.604 SO libspdk.so.6.0 00:03:05.604 SYMLINK libspdk.so 00:03:05.868 CC app/trace_record/trace_record.o 00:03:05.868 CXX app/trace/trace.o 00:03:05.868 CC test/rpc_client/rpc_client_test.o 00:03:05.868 CC app/spdk_nvme_perf/perf.o 00:03:05.868 CC app/spdk_top/spdk_top.o 00:03:05.868 CC app/spdk_lspci/spdk_lspci.o 00:03:05.868 CC app/spdk_nvme_discover/discovery_aer.o 00:03:05.868 CC app/spdk_nvme_identify/identify.o 00:03:05.868 TEST_HEADER include/spdk/accel.h 00:03:05.868 TEST_HEADER include/spdk/accel_module.h 00:03:05.868 TEST_HEADER include/spdk/barrier.h 00:03:05.868 TEST_HEADER include/spdk/assert.h 00:03:05.868 TEST_HEADER include/spdk/base64.h 00:03:05.868 TEST_HEADER include/spdk/bdev.h 00:03:05.868 TEST_HEADER include/spdk/bdev_module.h 00:03:05.868 TEST_HEADER include/spdk/bdev_zone.h 00:03:05.868 TEST_HEADER include/spdk/bit_array.h 00:03:05.868 TEST_HEADER include/spdk/bit_pool.h 00:03:05.868 TEST_HEADER include/spdk/blob_bdev.h 00:03:05.868 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:05.868 TEST_HEADER include/spdk/blobfs.h 00:03:05.868 TEST_HEADER include/spdk/blob.h 00:03:05.868 TEST_HEADER include/spdk/conf.h 00:03:05.868 TEST_HEADER include/spdk/config.h 00:03:05.869 TEST_HEADER include/spdk/cpuset.h 00:03:05.869 TEST_HEADER include/spdk/crc16.h 00:03:05.869 TEST_HEADER include/spdk/crc64.h 00:03:05.869 TEST_HEADER include/spdk/crc32.h 00:03:05.869 TEST_HEADER include/spdk/dif.h 00:03:05.869 TEST_HEADER include/spdk/dma.h 00:03:05.869 TEST_HEADER include/spdk/endian.h 00:03:05.869 TEST_HEADER include/spdk/env_dpdk.h 00:03:05.869 TEST_HEADER include/spdk/env.h 00:03:05.869 TEST_HEADER include/spdk/event.h 00:03:05.869 TEST_HEADER include/spdk/fd_group.h 00:03:05.869 TEST_HEADER include/spdk/fd.h 00:03:05.869 TEST_HEADER include/spdk/ftl.h 00:03:05.869 TEST_HEADER include/spdk/file.h 00:03:05.869 TEST_HEADER include/spdk/gpt_spec.h 00:03:05.869 TEST_HEADER include/spdk/histogram_data.h 00:03:05.869 TEST_HEADER include/spdk/hexlify.h 00:03:05.869 TEST_HEADER include/spdk/idxd.h 00:03:05.869 TEST_HEADER include/spdk/idxd_spec.h 00:03:05.869 TEST_HEADER include/spdk/init.h 00:03:05.869 TEST_HEADER include/spdk/ioat.h 00:03:05.869 TEST_HEADER include/spdk/ioat_spec.h 00:03:05.869 
TEST_HEADER include/spdk/iscsi_spec.h 00:03:05.869 TEST_HEADER include/spdk/json.h 00:03:05.869 TEST_HEADER include/spdk/jsonrpc.h 00:03:05.869 TEST_HEADER include/spdk/keyring.h 00:03:05.869 TEST_HEADER include/spdk/keyring_module.h 00:03:05.869 TEST_HEADER include/spdk/likely.h 00:03:05.869 TEST_HEADER include/spdk/log.h 00:03:05.869 TEST_HEADER include/spdk/memory.h 00:03:05.869 TEST_HEADER include/spdk/lvol.h 00:03:05.869 TEST_HEADER include/spdk/mmio.h 00:03:05.869 TEST_HEADER include/spdk/nbd.h 00:03:05.869 TEST_HEADER include/spdk/notify.h 00:03:05.869 TEST_HEADER include/spdk/nvme.h 00:03:05.869 TEST_HEADER include/spdk/nvme_intel.h 00:03:05.869 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:05.869 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:05.869 TEST_HEADER include/spdk/nvme_spec.h 00:03:05.869 TEST_HEADER include/spdk/nvme_zns.h 00:03:05.869 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:05.869 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:05.869 TEST_HEADER include/spdk/nvmf.h 00:03:05.869 TEST_HEADER include/spdk/nvmf_spec.h 00:03:05.869 TEST_HEADER include/spdk/nvmf_transport.h 00:03:05.869 TEST_HEADER include/spdk/opal.h 00:03:05.869 TEST_HEADER include/spdk/opal_spec.h 00:03:05.869 TEST_HEADER include/spdk/pci_ids.h 00:03:05.869 TEST_HEADER include/spdk/pipe.h 00:03:05.869 TEST_HEADER include/spdk/queue.h 00:03:05.869 TEST_HEADER include/spdk/reduce.h 00:03:05.869 TEST_HEADER include/spdk/rpc.h 00:03:05.869 TEST_HEADER include/spdk/scheduler.h 00:03:05.869 TEST_HEADER include/spdk/scsi.h 00:03:05.869 TEST_HEADER include/spdk/scsi_spec.h 00:03:05.869 TEST_HEADER include/spdk/sock.h 00:03:05.869 TEST_HEADER include/spdk/string.h 00:03:05.869 TEST_HEADER include/spdk/stdinc.h 00:03:05.869 TEST_HEADER include/spdk/thread.h 00:03:05.869 TEST_HEADER include/spdk/trace.h 00:03:05.869 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:05.869 TEST_HEADER include/spdk/trace_parser.h 00:03:05.869 TEST_HEADER include/spdk/tree.h 00:03:05.869 TEST_HEADER include/spdk/ublk.h 00:03:05.869 TEST_HEADER include/spdk/util.h 00:03:05.869 TEST_HEADER include/spdk/uuid.h 00:03:05.869 TEST_HEADER include/spdk/version.h 00:03:05.869 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:05.869 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:05.869 TEST_HEADER include/spdk/vhost.h 00:03:05.869 TEST_HEADER include/spdk/vmd.h 00:03:05.869 TEST_HEADER include/spdk/xor.h 00:03:05.869 TEST_HEADER include/spdk/zipf.h 00:03:05.869 CXX test/cpp_headers/accel.o 00:03:05.869 CXX test/cpp_headers/accel_module.o 00:03:05.869 CXX test/cpp_headers/assert.o 00:03:05.869 CXX test/cpp_headers/barrier.o 00:03:05.869 CXX test/cpp_headers/base64.o 00:03:05.869 CXX test/cpp_headers/bdev.o 00:03:05.869 CXX test/cpp_headers/bdev_module.o 00:03:05.869 CXX test/cpp_headers/bdev_zone.o 00:03:05.869 CC app/spdk_dd/spdk_dd.o 00:03:05.869 CXX test/cpp_headers/bit_array.o 00:03:05.869 CXX test/cpp_headers/bit_pool.o 00:03:05.869 CXX test/cpp_headers/blob_bdev.o 00:03:05.869 CXX test/cpp_headers/blobfs_bdev.o 00:03:05.869 CXX test/cpp_headers/blobfs.o 00:03:05.869 CXX test/cpp_headers/blob.o 00:03:05.869 CXX test/cpp_headers/conf.o 00:03:05.869 CXX test/cpp_headers/config.o 00:03:05.869 CC app/nvmf_tgt/nvmf_main.o 00:03:05.869 CC app/iscsi_tgt/iscsi_tgt.o 00:03:05.869 CXX test/cpp_headers/cpuset.o 00:03:05.869 CXX test/cpp_headers/crc16.o 00:03:05.869 CC app/spdk_tgt/spdk_tgt.o 00:03:05.869 CC examples/util/zipf/zipf.o 00:03:05.869 CXX test/cpp_headers/crc32.o 00:03:05.869 CC test/thread/poller_perf/poller_perf.o 00:03:05.869 CC 
examples/ioat/perf/perf.o 00:03:05.869 CC examples/ioat/verify/verify.o 00:03:05.869 CC test/env/pci/pci_ut.o 00:03:05.869 CC app/fio/nvme/fio_plugin.o 00:03:05.869 CC test/app/stub/stub.o 00:03:05.869 CC test/app/jsoncat/jsoncat.o 00:03:05.869 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.869 CC test/app/histogram_perf/histogram_perf.o 00:03:05.869 CC test/env/vtophys/vtophys.o 00:03:05.869 CC test/env/memory/memory_ut.o 00:03:06.128 CC test/app/bdev_svc/bdev_svc.o 00:03:06.128 CC test/dma/test_dma/test_dma.o 00:03:06.128 CC app/fio/bdev/fio_plugin.o 00:03:06.128 LINK spdk_lspci 00:03:06.128 CC test/env/mem_callbacks/mem_callbacks.o 00:03:06.128 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:06.128 LINK rpc_client_test 00:03:06.128 LINK spdk_nvme_discover 00:03:06.128 LINK poller_perf 00:03:06.128 LINK zipf 00:03:06.391 CXX test/cpp_headers/crc64.o 00:03:06.391 LINK vtophys 00:03:06.391 CXX test/cpp_headers/dif.o 00:03:06.391 LINK jsoncat 00:03:06.391 CXX test/cpp_headers/dma.o 00:03:06.391 LINK nvmf_tgt 00:03:06.391 CXX test/cpp_headers/endian.o 00:03:06.391 LINK interrupt_tgt 00:03:06.391 LINK histogram_perf 00:03:06.391 CXX test/cpp_headers/env_dpdk.o 00:03:06.391 LINK env_dpdk_post_init 00:03:06.391 CXX test/cpp_headers/env.o 00:03:06.391 CXX test/cpp_headers/event.o 00:03:06.391 CXX test/cpp_headers/fd_group.o 00:03:06.391 CXX test/cpp_headers/fd.o 00:03:06.391 CXX test/cpp_headers/file.o 00:03:06.391 LINK spdk_trace_record 00:03:06.391 CXX test/cpp_headers/ftl.o 00:03:06.391 LINK iscsi_tgt 00:03:06.391 CXX test/cpp_headers/gpt_spec.o 00:03:06.391 LINK stub 00:03:06.391 CXX test/cpp_headers/hexlify.o 00:03:06.391 LINK spdk_tgt 00:03:06.391 CXX test/cpp_headers/histogram_data.o 00:03:06.391 CXX test/cpp_headers/idxd.o 00:03:06.391 LINK ioat_perf 00:03:06.391 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:06.391 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:06.391 CXX test/cpp_headers/idxd_spec.o 00:03:06.391 LINK bdev_svc 00:03:06.391 LINK verify 00:03:06.391 CXX test/cpp_headers/init.o 00:03:06.391 CXX test/cpp_headers/ioat.o 00:03:06.391 CXX test/cpp_headers/ioat_spec.o 00:03:06.651 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:06.651 CXX test/cpp_headers/iscsi_spec.o 00:03:06.651 CXX test/cpp_headers/json.o 00:03:06.651 CXX test/cpp_headers/jsonrpc.o 00:03:06.651 CXX test/cpp_headers/keyring.o 00:03:06.651 LINK spdk_dd 00:03:06.651 CXX test/cpp_headers/keyring_module.o 00:03:06.651 CXX test/cpp_headers/likely.o 00:03:06.651 CXX test/cpp_headers/log.o 00:03:06.651 CXX test/cpp_headers/lvol.o 00:03:06.651 CXX test/cpp_headers/memory.o 00:03:06.651 CXX test/cpp_headers/mmio.o 00:03:06.651 CXX test/cpp_headers/nbd.o 00:03:06.651 LINK pci_ut 00:03:06.651 LINK spdk_trace 00:03:06.651 CXX test/cpp_headers/notify.o 00:03:06.651 CXX test/cpp_headers/nvme.o 00:03:06.651 CXX test/cpp_headers/nvme_intel.o 00:03:06.651 CXX test/cpp_headers/nvme_ocssd.o 00:03:06.651 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:06.651 CXX test/cpp_headers/nvme_spec.o 00:03:06.651 CXX test/cpp_headers/nvme_zns.o 00:03:06.651 CXX test/cpp_headers/nvmf_cmd.o 00:03:06.651 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:06.651 CXX test/cpp_headers/nvmf.o 00:03:06.914 CXX test/cpp_headers/nvmf_spec.o 00:03:06.914 CXX test/cpp_headers/nvmf_transport.o 00:03:06.914 LINK test_dma 00:03:06.914 CXX test/cpp_headers/opal.o 00:03:06.914 CXX test/cpp_headers/opal_spec.o 00:03:06.914 CXX test/cpp_headers/pci_ids.o 00:03:06.914 LINK nvme_fuzz 00:03:06.914 CC test/event/reactor/reactor.o 00:03:06.914 CXX 
test/cpp_headers/pipe.o 00:03:06.914 CC test/event/event_perf/event_perf.o 00:03:06.914 CC test/event/reactor_perf/reactor_perf.o 00:03:06.914 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.914 CXX test/cpp_headers/reduce.o 00:03:06.914 CXX test/cpp_headers/queue.o 00:03:06.914 CC examples/sock/hello_world/hello_sock.o 00:03:06.914 CC examples/idxd/perf/perf.o 00:03:06.914 CC test/event/app_repeat/app_repeat.o 00:03:07.173 CXX test/cpp_headers/rpc.o 00:03:07.173 CXX test/cpp_headers/scheduler.o 00:03:07.173 LINK spdk_bdev 00:03:07.173 CC examples/vmd/led/led.o 00:03:07.173 CC examples/thread/thread/thread_ex.o 00:03:07.173 CXX test/cpp_headers/scsi.o 00:03:07.173 CXX test/cpp_headers/scsi_spec.o 00:03:07.173 LINK spdk_nvme 00:03:07.173 CXX test/cpp_headers/sock.o 00:03:07.173 CXX test/cpp_headers/stdinc.o 00:03:07.173 CXX test/cpp_headers/string.o 00:03:07.173 CC test/event/scheduler/scheduler.o 00:03:07.173 CXX test/cpp_headers/thread.o 00:03:07.173 CXX test/cpp_headers/trace.o 00:03:07.173 CXX test/cpp_headers/trace_parser.o 00:03:07.173 CXX test/cpp_headers/tree.o 00:03:07.173 CXX test/cpp_headers/ublk.o 00:03:07.173 CXX test/cpp_headers/util.o 00:03:07.173 CXX test/cpp_headers/uuid.o 00:03:07.173 CXX test/cpp_headers/version.o 00:03:07.173 CXX test/cpp_headers/vfio_user_pci.o 00:03:07.173 CXX test/cpp_headers/vfio_user_spec.o 00:03:07.173 CXX test/cpp_headers/vhost.o 00:03:07.173 CXX test/cpp_headers/vmd.o 00:03:07.173 CXX test/cpp_headers/xor.o 00:03:07.173 CXX test/cpp_headers/zipf.o 00:03:07.173 LINK reactor 00:03:07.173 LINK mem_callbacks 00:03:07.173 CC app/vhost/vhost.o 00:03:07.438 LINK lsvmd 00:03:07.438 LINK reactor_perf 00:03:07.438 LINK event_perf 00:03:07.438 LINK vhost_fuzz 00:03:07.438 LINK spdk_nvme_identify 00:03:07.438 LINK app_repeat 00:03:07.438 LINK spdk_nvme_perf 00:03:07.438 LINK led 00:03:07.438 LINK spdk_top 00:03:07.438 LINK hello_sock 00:03:07.438 CC test/nvme/e2edp/nvme_dp.o 00:03:07.438 CC test/nvme/reserve/reserve.o 00:03:07.438 CC test/nvme/err_injection/err_injection.o 00:03:07.438 CC test/nvme/overhead/overhead.o 00:03:07.438 CC test/nvme/simple_copy/simple_copy.o 00:03:07.438 CC test/nvme/aer/aer.o 00:03:07.438 CC test/nvme/sgl/sgl.o 00:03:07.438 CC test/nvme/startup/startup.o 00:03:07.438 CC test/nvme/reset/reset.o 00:03:07.697 CC test/accel/dif/dif.o 00:03:07.697 LINK thread 00:03:07.697 CC test/blobfs/mkfs/mkfs.o 00:03:07.697 CC test/nvme/boot_partition/boot_partition.o 00:03:07.697 CC test/nvme/connect_stress/connect_stress.o 00:03:07.697 CC test/nvme/compliance/nvme_compliance.o 00:03:07.697 LINK scheduler 00:03:07.698 CC test/nvme/fused_ordering/fused_ordering.o 00:03:07.698 CC test/lvol/esnap/esnap.o 00:03:07.698 CC test/nvme/fdp/fdp.o 00:03:07.698 CC test/nvme/cuse/cuse.o 00:03:07.698 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.698 LINK vhost 00:03:07.698 LINK idxd_perf 00:03:07.698 LINK startup 00:03:08.006 LINK connect_stress 00:03:08.006 LINK reserve 00:03:08.006 LINK err_injection 00:03:08.006 LINK doorbell_aers 00:03:08.006 LINK mkfs 00:03:08.006 LINK boot_partition 00:03:08.006 LINK simple_copy 00:03:08.006 LINK aer 00:03:08.006 CC examples/nvme/abort/abort.o 00:03:08.006 CC examples/nvme/hello_world/hello_world.o 00:03:08.006 LINK nvme_dp 00:03:08.006 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:08.006 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:08.006 CC examples/nvme/arbitration/arbitration.o 00:03:08.006 CC examples/nvme/hotplug/hotplug.o 00:03:08.006 LINK sgl 00:03:08.006 CC examples/nvme/reconnect/reconnect.o 
00:03:08.006 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:08.006 LINK memory_ut 00:03:08.006 LINK overhead 00:03:08.006 LINK fused_ordering 00:03:08.006 LINK reset 00:03:08.006 LINK nvme_compliance 00:03:08.006 LINK fdp 00:03:08.264 CC examples/accel/perf/accel_perf.o 00:03:08.264 CC examples/blob/hello_world/hello_blob.o 00:03:08.264 CC examples/blob/cli/blobcli.o 00:03:08.264 LINK cmb_copy 00:03:08.264 LINK dif 00:03:08.264 LINK hello_world 00:03:08.264 LINK hotplug 00:03:08.264 LINK pmr_persistence 00:03:08.264 LINK abort 00:03:08.522 LINK arbitration 00:03:08.522 LINK reconnect 00:03:08.522 LINK hello_blob 00:03:08.522 LINK nvme_manage 00:03:08.522 LINK accel_perf 00:03:08.780 CC test/bdev/bdevio/bdevio.o 00:03:08.780 LINK blobcli 00:03:08.780 LINK iscsi_fuzz 00:03:09.037 CC examples/bdev/hello_world/hello_bdev.o 00:03:09.037 CC examples/bdev/bdevperf/bdevperf.o 00:03:09.037 LINK bdevio 00:03:09.295 LINK cuse 00:03:09.295 LINK hello_bdev 00:03:09.860 LINK bdevperf 00:03:10.119 CC examples/nvmf/nvmf/nvmf.o 00:03:10.377 LINK nvmf 00:03:12.916 LINK esnap 00:03:12.916 00:03:12.916 real 0m41.313s 00:03:12.916 user 7m25.982s 00:03:12.916 sys 1m49.106s 00:03:12.916 13:11:10 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:12.916 13:11:10 make -- common/autotest_common.sh@10 -- $ set +x 00:03:12.916 ************************************ 00:03:12.916 END TEST make 00:03:12.916 ************************************ 00:03:12.916 13:11:10 -- common/autotest_common.sh@1142 -- $ return 0 00:03:12.916 13:11:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:12.916 13:11:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:12.916 13:11:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:12.916 13:11:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.916 13:11:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:12.916 13:11:10 -- pm/common@44 -- $ pid=3340087 00:03:12.916 13:11:10 -- pm/common@50 -- $ kill -TERM 3340087 00:03:12.916 13:11:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.916 13:11:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:12.916 13:11:10 -- pm/common@44 -- $ pid=3340089 00:03:12.916 13:11:10 -- pm/common@50 -- $ kill -TERM 3340089 00:03:12.916 13:11:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.916 13:11:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:12.916 13:11:10 -- pm/common@44 -- $ pid=3340091 00:03:12.916 13:11:10 -- pm/common@50 -- $ kill -TERM 3340091 00:03:12.916 13:11:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.916 13:11:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:12.917 13:11:10 -- pm/common@44 -- $ pid=3340121 00:03:12.917 13:11:10 -- pm/common@50 -- $ sudo -E kill -TERM 3340121 00:03:13.175 13:11:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:13.175 13:11:10 -- nvmf/common.sh@7 -- # uname -s 00:03:13.175 13:11:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:13.175 13:11:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:13.175 13:11:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:13.175 13:11:10 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:03:13.175 13:11:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:13.175 13:11:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:13.175 13:11:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:13.175 13:11:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:13.175 13:11:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:13.175 13:11:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:13.175 13:11:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:13.175 13:11:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:13.175 13:11:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:13.175 13:11:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:13.175 13:11:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:13.175 13:11:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:13.175 13:11:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:13.175 13:11:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:13.175 13:11:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:13.175 13:11:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:13.175 13:11:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.175 13:11:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.175 13:11:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.175 13:11:10 -- paths/export.sh@5 -- # export PATH 00:03:13.175 13:11:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.175 13:11:10 -- nvmf/common.sh@47 -- # : 0 00:03:13.175 13:11:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:13.175 13:11:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:13.175 13:11:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:13.175 13:11:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:13.175 13:11:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:13.175 13:11:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:13.175 13:11:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:13.175 13:11:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:13.175 13:11:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:13.175 13:11:10 -- spdk/autotest.sh@32 -- # uname -s 00:03:13.175 13:11:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 
00:03:13.175 13:11:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:13.175 13:11:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:13.175 13:11:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:13.175 13:11:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:13.175 13:11:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:13.175 13:11:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:13.175 13:11:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:13.175 13:11:10 -- spdk/autotest.sh@48 -- # udevadm_pid=3411434 00:03:13.175 13:11:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:13.175 13:11:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:13.175 13:11:10 -- pm/common@17 -- # local monitor 00:03:13.175 13:11:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.175 13:11:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.175 13:11:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.175 13:11:10 -- pm/common@21 -- # date +%s 00:03:13.175 13:11:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.175 13:11:10 -- pm/common@21 -- # date +%s 00:03:13.175 13:11:10 -- pm/common@25 -- # sleep 1 00:03:13.175 13:11:10 -- pm/common@21 -- # date +%s 00:03:13.175 13:11:10 -- pm/common@21 -- # date +%s 00:03:13.175 13:11:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720782670 00:03:13.175 13:11:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720782670 00:03:13.175 13:11:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720782670 00:03:13.175 13:11:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720782670 00:03:13.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720782670_collect-vmstat.pm.log 00:03:13.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720782670_collect-cpu-load.pm.log 00:03:13.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720782670_collect-cpu-temp.pm.log 00:03:13.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720782670_collect-bmc-pm.bmc.pm.log 00:03:14.112 13:11:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:14.112 13:11:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:14.112 13:11:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:14.112 13:11:11 -- common/autotest_common.sh@10 -- # set +x 00:03:14.112 13:11:11 -- spdk/autotest.sh@59 -- # create_test_list 00:03:14.112 13:11:11 -- 
common/autotest_common.sh@746 -- # xtrace_disable 00:03:14.112 13:11:11 -- common/autotest_common.sh@10 -- # set +x 00:03:14.112 13:11:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:14.112 13:11:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:14.112 13:11:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:14.112 13:11:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:14.112 13:11:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:14.112 13:11:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:14.112 13:11:11 -- common/autotest_common.sh@1455 -- # uname 00:03:14.112 13:11:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:14.112 13:11:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:14.112 13:11:11 -- common/autotest_common.sh@1475 -- # uname 00:03:14.112 13:11:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:14.112 13:11:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:14.113 13:11:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:14.113 13:11:11 -- spdk/autotest.sh@72 -- # hash lcov 00:03:14.113 13:11:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:14.113 13:11:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:14.113 --rc lcov_branch_coverage=1 00:03:14.113 --rc lcov_function_coverage=1 00:03:14.113 --rc genhtml_branch_coverage=1 00:03:14.113 --rc genhtml_function_coverage=1 00:03:14.113 --rc genhtml_legend=1 00:03:14.113 --rc geninfo_all_blocks=1 00:03:14.113 ' 00:03:14.113 13:11:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:14.113 --rc lcov_branch_coverage=1 00:03:14.113 --rc lcov_function_coverage=1 00:03:14.113 --rc genhtml_branch_coverage=1 00:03:14.113 --rc genhtml_function_coverage=1 00:03:14.113 --rc genhtml_legend=1 00:03:14.113 --rc geninfo_all_blocks=1 00:03:14.113 ' 00:03:14.113 13:11:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:14.113 --rc lcov_branch_coverage=1 00:03:14.113 --rc lcov_function_coverage=1 00:03:14.113 --rc genhtml_branch_coverage=1 00:03:14.113 --rc genhtml_function_coverage=1 00:03:14.113 --rc genhtml_legend=1 00:03:14.113 --rc geninfo_all_blocks=1 00:03:14.113 --no-external' 00:03:14.113 13:11:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:14.113 --rc lcov_branch_coverage=1 00:03:14.113 --rc lcov_function_coverage=1 00:03:14.113 --rc genhtml_branch_coverage=1 00:03:14.113 --rc genhtml_function_coverage=1 00:03:14.113 --rc genhtml_legend=1 00:03:14.113 --rc geninfo_all_blocks=1 00:03:14.113 --no-external' 00:03:14.113 13:11:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:14.113 lcov: LCOV version 1.14 00:03:14.113 13:11:11 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:20.658 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:20.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:20.659 
00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/<header>.gcno ("<header>.gcno:no functions found"), reported once per header stub: conf, config, cpuset, crc16, crc32, crc64, dif, dma, endian, env_dpdk, env, event, fd_group, fd, file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, memory, lvol, mmio, nbd, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, reduce, queue, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util, uuid, version (the remaining header warnings continue below).
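These geninfo warnings appear to come from the cpp_headers unit-test objects: each public SPDK header is compiled into an otherwise empty translation unit, so the resulting .gcno files contain no function records and gcov/geninfo has nothing to report — expected noise rather than a coverage failure. For reference, a minimal sketch of the kind of lcov pass that emits these messages (geninfo is what lcov --capture runs internally); the paths and filter patterns are illustrative assumptions, not the exact autotest invocation:

  # Hypothetical coverage pass over a tree built with gcc coverage flags
  # (-fprofile-arcs -ftest-coverage); paths and filters are assumptions.
  BUILD_DIR=/path/to/spdk
  OUT_DIR=/tmp/cov
  mkdir -p "$OUT_DIR"
  # Scan all .gcno/.gcda pairs; header-only objects trigger the
  # "no functions found" warnings seen above.
  lcov --capture --directory "$BUILD_DIR" --output-file "$OUT_DIR/cov.info"
  # Drop system headers and test scaffolding from the report.
  lcov --remove "$OUT_DIR/cov.info" '/usr/*' '*/test/*' --output-file "$OUT_DIR/cov.filtered.info"
  # Render an HTML report from the filtered tracefile.
  genhtml "$OUT_DIR/cov.filtered.info" --output-directory "$OUT_DIR/html"

(The last few header warnings — vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf — plus one for lib/nvme/nvme_stubs.gcno follow below.)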
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:20.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:20.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:42.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:42.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:47.876 13:11:45 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:47.876 13:11:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.876 13:11:45 -- common/autotest_common.sh@10 -- # set +x 00:03:47.876 13:11:45 -- spdk/autotest.sh@91 -- # rm -f 00:03:47.876 13:11:45 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.251 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:49.251 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:49.251 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:49.251 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:49.251 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:49.251 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:49.251 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:49.251 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:49.251 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:03:49.251 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:49.251 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:49.251 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:49.251 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:49.251 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:49.251 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:49.251 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:49.251 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:49.509 13:11:46 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:49.509 13:11:46 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:49.509 13:11:46 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:49.509 13:11:46 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:49.509 13:11:46 -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.509 13:11:46 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:49.509 13:11:46 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:49.509 13:11:46 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.509 13:11:46 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.509 13:11:46 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:49.509 13:11:46 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:49.509 13:11:46 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:49.509 13:11:46 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:49.509 13:11:46 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:49.509 13:11:46 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:49.509 No valid GPT data, bailing 00:03:49.509 13:11:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.509 13:11:46 -- scripts/common.sh@391 -- # pt= 00:03:49.509 13:11:46 -- scripts/common.sh@392 -- # return 1 00:03:49.509 13:11:46 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:49.509 1+0 records in 00:03:49.509 1+0 records out 00:03:49.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00250268 s, 419 MB/s 00:03:49.509 13:11:46 -- spdk/autotest.sh@118 -- # sync 00:03:49.509 13:11:46 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:49.509 13:11:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:49.509 13:11:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.408 13:11:48 -- spdk/autotest.sh@124 -- # uname -s 00:03:51.408 13:11:48 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:51.408 13:11:48 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.408 13:11:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.408 13:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.408 13:11:48 -- common/autotest_common.sh@10 -- # set +x 00:03:51.408 ************************************ 00:03:51.408 START TEST setup.sh 00:03:51.408 ************************************ 00:03:51.408 13:11:48 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.408 * Looking for test storage... 00:03:51.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.408 13:11:48 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:51.408 13:11:48 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:51.408 13:11:48 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:51.408 13:11:48 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.408 13:11:48 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.408 13:11:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:51.408 ************************************ 00:03:51.408 START TEST acl 00:03:51.408 ************************************ 00:03:51.408 13:11:48 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:51.665 * Looking for test storage... 
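Before the setup tests start, the trace above shows autotest checking that the NVMe namespace is safe to scrub: the namespace is not zoned, spdk-gpt.py finds no valid GPT, and blkid reports no partition-table type, so the first MiB of /dev/nvme0n1 is zeroed with dd and the page cache is flushed. A condensed sketch of that guard, with the device name hard-coded as an assumption (the real loop iterates over every non-partition /dev/nvme*n* namespace as traced above):

  # Sketch of the pre-test wipe guard seen in the trace; the device name is an
  # assumption, the real code loops over all non-partition NVMe namespaces.
  dev=/dev/nvme0n1
  name=${dev#/dev/}

  # Leave zoned namespaces alone; they cannot be treated as plain block devices.
  if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
      exit 0
  fi

  # Only wipe when no partition table is detected on the namespace.
  if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1   # clobber stale metadata in the first MiB
      sync
  fi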
00:03:51.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.665 13:11:48 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:51.665 13:11:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:51.665 13:11:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:51.665 13:11:48 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:51.665 13:11:48 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.665 13:11:48 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:51.665 13:11:48 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:51.665 13:11:48 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.665 13:11:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.665 13:11:48 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:51.665 13:11:48 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:51.665 13:11:48 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:51.665 13:11:48 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:51.665 13:11:48 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:51.665 13:11:48 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.665 13:11:48 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.040 13:11:50 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:53.040 13:11:50 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:53.040 13:11:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 13:11:50 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:53.040 13:11:50 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.040 13:11:50 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:54.419 Hugepages 00:03:54.419 node hugesize free / total 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 00:03:54.419 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:54.419 13:11:51 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:54.419 13:11:51 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.419 13:11:51 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.419 13:11:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.419 ************************************ 00:03:54.419 START TEST denied 00:03:54.419 ************************************ 00:03:54.419 13:11:51 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:54.419 13:11:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:03:54.419 13:11:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:54.419 13:11:51 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:03:54.419 13:11:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.419 13:11:51 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.793 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:03:55.793 13:11:53 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.793 13:11:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.328 00:03:58.328 real 0m4.059s 00:03:58.328 user 0m1.195s 00:03:58.328 sys 0m1.906s 00:03:58.328 13:11:55 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.328 13:11:55 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:58.328 ************************************ 00:03:58.328 END TEST denied 00:03:58.328 ************************************ 00:03:58.328 13:11:55 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:58.328 13:11:55 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:58.328 13:11:55 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.328 13:11:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.328 13:11:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.328 ************************************ 00:03:58.328 START TEST allowed 00:03:58.328 ************************************ 00:03:58.328 13:11:55 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:58.328 13:11:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:03:58.328 13:11:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:58.328 13:11:55 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:03:58.328 13:11:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.328 13:11:55 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.866 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.866 13:11:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:00.866 13:11:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:00.866 13:11:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:00.866 13:11:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.866 13:11:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.773 00:04:02.773 real 0m3.996s 00:04:02.773 user 0m1.063s 00:04:02.773 sys 0m1.859s 00:04:02.773 13:11:59 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.773 13:11:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:02.773 ************************************ 00:04:02.774 END TEST allowed 00:04:02.774 ************************************ 00:04:02.774 13:11:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:02.774 00:04:02.774 real 0m10.901s 00:04:02.774 user 0m3.350s 00:04:02.774 sys 0m5.592s 00:04:02.774 13:11:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.774 13:11:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:02.774 ************************************ 00:04:02.774 END TEST acl 00:04:02.774 ************************************ 00:04:02.774 13:11:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.774 13:11:59 setup.sh -- 
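The two ACL subtests above exercise the same scripts/setup.sh entry point with opposite filters: with the controller's BDF in PCI_BLOCKED, "setup.sh config" has to print "Skipping denied controller at 0000:0b:00.0" and leave the device on the kernel nvme driver, while with the same BDF in PCI_ALLOWED it must be rebound to vfio-pci. A hedged sketch of that check (the BDF and expected messages come from the trace; this is not the literal acl.sh code):

  # Illustrative re-run of the denied/allowed checks; BDF taken from the log above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdf=0000:0b:00.0

  # Denied: a controller listed in PCI_BLOCKED must be skipped.
  PCI_BLOCKED=" $bdf" "$SPDK_DIR/scripts/setup.sh" config \
      | grep "Skipping denied controller at $bdf"

  # Allowed: only the controller in PCI_ALLOWED is rebound to vfio-pci.
  PCI_ALLOWED="$bdf" "$SPDK_DIR/scripts/setup.sh" config \
      | grep -E "$bdf .*: nvme -> .*"

  # Put the node back the way it was.
  "$SPDK_DIR/scripts/setup.sh" reset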
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.774 13:11:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.774 13:11:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.774 13:11:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.774 ************************************ 00:04:02.774 START TEST hugepages 00:04:02.774 ************************************ 00:04:02.774 13:11:59 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.774 * Looking for test storage... 00:04:02.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 37891024 kB' 'MemAvailable: 41471512 kB' 'Buffers: 2704 kB' 'Cached: 15993920 kB' 'SwapCached: 0 kB' 'Active: 12996884 kB' 'Inactive: 3526128 kB' 'Active(anon): 12555124 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529700 kB' 'Mapped: 209780 kB' 'Shmem: 12028736 kB' 'KReclaimable: 212448 kB' 'Slab: 586780 kB' 'SReclaimable: 212448 kB' 'SUnreclaim: 374332 kB' 'KernelStack: 12960 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562320 kB' 'Committed_AS: 13692844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.774 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': '
[ the xtrace then repeats the same skipped-field pattern — the loop's "IFS=': '" / "read -r var val _" lines followed by "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" and "continue" — for each remaining /proc/meminfo field from Active(file) through HugePages_Free; those ~160 identical trace entries are condensed here ]
setup/common.sh@31 -- # read -r var val _ 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:02.775 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.776 
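clear_hp above zeroes every hugepage pool on both NUMA nodes before a test runs, and the default_setup test that starts below then requests 1024 default-size (2048 kB) pages on node 0 only. A small sketch of the sysfs writes behind that, using the standard kernel paths (the function names are illustrative; the numbers mirror the trace):

  # Sketch of per-node hugepage handling via standard sysfs paths; the
  # 1024-page / node0 figures mirror the default_setup trace below.
  clear_hugepages() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"      # release every pool on every node
          done
      done
  }

  request_node0_hugepages() {
      local nr=${1:-1024}                       # 1024 x 2048 kB = 2 GiB
      echo "$nr" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  }

  clear_hugepages
  request_node0_hugepages 1024
  grep -E 'HugePages_(Total|Free)' /proc/meminfo   # verify the pool was created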
13:11:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.776 13:11:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:02.776 13:11:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.776 13:11:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.776 13:11:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.776 ************************************ 00:04:02.776 START TEST default_setup 00:04:02.776 ************************************ 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.776 13:11:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.711 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:03.711 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:03.711 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:03.711 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:03.711 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:03.971 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:03.971 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:03.971 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:03.971 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:03.971 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:03.971 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:03.971 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:03.971 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:03.971 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:03.971 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:03.971 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:04.934 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39935140 kB' 'MemAvailable: 43515732 kB' 'Buffers: 2704 kB' 'Cached: 15994016 kB' 'SwapCached: 0 kB' 'Active: 13022112 kB' 'Inactive: 3526128 kB' 'Active(anon): 12580352 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554796 kB' 'Mapped: 210720 kB' 'Shmem: 12028832 kB' 'KReclaimable: 212656 kB' 'Slab: 586412 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373756 
kB' 'KernelStack: 12960 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
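
At this point the default_setup trace has covered the whole preamble: Hugepagesize was read as 2048 kB from /proc/meminfo, the 2097152 kB request was converted to nr_hugepages=1024 (2097152 / 2048 = 1024) and pinned to node 0, the existing reservations were cleared by echoing 0 into each node's hugepages-*/nr_hugepages files, scripts/setup.sh rebound the ioatdma and NVMe devices to vfio-pci, and verify_nr_hugepages is now walking /proc/meminfo key by key through get_meminfo. A condensed sketch of the parsing loop the xtrace is stepping through, using the same field handling the trace shows (the function name below is illustrative, not part of the scripts):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

    # Minimal stand-in for the meminfo lookup traced above: read /proc/meminfo
    # (or a per-node meminfo when a node index is given), split each line on
    # ': ' and print the value of the requested key.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N"
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch Hugepagesize   # prints 2048 on this machine, per the snapshot above
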
00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.934 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.935 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39937020 kB' 'MemAvailable: 43517612 kB' 'Buffers: 2704 kB' 'Cached: 15994016 kB' 'SwapCached: 0 kB' 'Active: 13021632 kB' 'Inactive: 3526128 kB' 'Active(anon): 12579872 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554324 kB' 'Mapped: 210644 kB' 'Shmem: 12028832 kB' 'KReclaimable: 212656 kB' 'Slab: 586404 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373748 kB' 'KernelStack: 12912 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197100 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
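
The snapshot the verifier just printed already contains the counters it is after: HugePages_Total: 1024 and HugePages_Free: 1024 match the 1024 pages requested, and Hugetlb: 2097152 kB is exactly 1024 * 2048 kB. The loop above is now resolving HugePages_Surp the same way it resolved AnonHugePages. Outside the harness, the same counters can be pulled in one pass rather than key by key, for example:

    # Manual spot check of the counters verify_nr_hugepages reads one at a time
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb)' /proc/meminfo

    # Per-node view of the 2048 kB pool the test populates (paths as used in the trace)
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
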
00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.936 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
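
With HugePages_Surp resolved to 0 (surp=0), the verifier now repeats the lookup for HugePages_Rsvd, which the same snapshot reports as 0, so the resv counter will resolve the same way. All three lookups ran with node= left empty, so mem_f stayed /proc/meminfo; when a node index is passed, the [[ -e /sys/devices/system/node/node$node/meminfo ]] test seen in the trace switches the source to the per-node file, whose lines carry the "Node N" prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips off. Using the sketch above, a node-level query would look like this (node index chosen for illustration):

    # Same parsing, different source file: lines read "Node 0 HugePages_Total: <count>"
    get_meminfo_sketch HugePages_Total 0   # reads /sys/devices/system/node/node0/meminfo
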
00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.937 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39938060 kB' 'MemAvailable: 43518652 kB' 'Buffers: 2704 kB' 'Cached: 15994036 kB' 'SwapCached: 0 kB' 'Active: 13021304 kB' 'Inactive: 3526128 kB' 'Active(anon): 12579544 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553988 kB' 'Mapped: 210604 kB' 'Shmem: 12028852 kB' 'KReclaimable: 212656 kB' 'Slab: 586432 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373776 kB' 'KernelStack: 12928 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197100 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.938 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 
13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:05.201 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/IFS/read cycle repeats for each remaining /proc/meminfo field until HugePages_Rsvd is reached ...]
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.202 nr_hugepages=1024
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.202 resv_hugepages=0
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.202 surplus_hugepages=0
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.202 anon_hugepages=0
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
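The xtrace above is setup/common.sh's get_meminfo walking the requested meminfo file field by field with IFS=': ' until the key it was asked for matches, then echoing that value. A minimal bash sketch of the same lookup pattern, under the assumption that a node-local file is used when a node id is passed; the function name and the trailing 0 fallback are illustrative, not the SPDK helper itself:

#!/usr/bin/env bash
shopt -s extglob
# Sketch of a get_meminfo-style lookup: return one field from /proc/meminfo,
# or from a node's meminfo file under sysfs when a node id is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip it so both file
    # layouts parse the same way (needs extglob, as in the trace above).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "Key: value kB" on ': ', exactly as the xtrace shows.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0   # fallback when the field is absent (assumption for illustration)
}

get_meminfo_sketch HugePages_Total     # 1024 on this runner
get_meminfo_sketch HugePages_Surp 0    # surplus pages on NUMA node 0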
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:05.202 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:05.203 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39939620 kB' 'MemAvailable: 43520212 kB' 'Buffers: 2704 kB' 'Cached: 15994056 kB' 'SwapCached: 0 kB' 'Active: 13021464 kB' 'Inactive: 3526128 kB' 'Active(anon): 12579704 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554168 kB' 'Mapped: 210604 kB' 'Shmem: 12028872 kB' 'KReclaimable: 212656 kB' 'Slab: 586424 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373768 kB' 'KernelStack: 12928 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197100 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB'
[... the same compare/continue cycle repeats for each /proc/meminfo field until HugePages_Total is reached ...]
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
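get_nodes above only needs to know which NUMA nodes exist and how many hugepages each one currently holds (1024 on node0, 0 on node1 here). A small sketch of that enumeration, assuming the per-node count comes from the standard hugepages-2048kB sysfs file; the exact file the SPDK helper reads is not visible in this trace:

#!/usr/bin/env bash
shopt -s extglob nullglob
# Sketch: enumerate NUMA nodes and record a per-node 2048 kB hugepage count.
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # node0 -> index 0, node1 -> index 1, matching the trace's ${node##*node}
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"          # 2 on this runner
for id in "${!nodes_sys[@]}"; do
    echo "node$id=${nodes_sys[id]}"       # e.g. node0=1024, node1=0
done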
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:05.204 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18223208 kB' 'MemUsed: 14606676 kB' 'SwapCached: 0 kB' 'Active: 8308484 kB' 'Inactive: 3279408 kB' 'Active(anon): 8016912 kB' 'Inactive(anon): 0 kB' 'Active(file): 291572 kB' 'Inactive(file): 3279408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11295900 kB' 'Mapped: 125036 kB' 'AnonPages: 295104 kB' 'Shmem: 7724920 kB' 'KernelStack: 8376 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105744 kB' 'Slab: 298028 kB' 'SReclaimable: 105744 kB' 'SUnreclaim: 192284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the same compare/continue cycle repeats for each node0 meminfo field until HugePages_Surp is reached ...]
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:05.206 node0=1024 expecting 1024
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:05.206
00:04:05.206 real 0m2.525s
00:04:05.206 user 0m0.640s
00:04:05.206 sys 0m0.961s
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:05.206 13:12:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:05.206 ************************************
00:04:05.206 END TEST default_setup
00:04:05.206 ************************************
00:04:05.206 13:12:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
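default_setup finishes with nr_hugepages=1024, no reserved or surplus pages, and all 1024 pages resident on node0. For reference, a hedged sketch of the stock Linux hugetlb knobs this verification relies on (plain kernel interfaces, not SPDK-specific; writing them needs root, and the numbers in the comments are the values seen in this run):

#!/usr/bin/env bash
set -e
# Request 1024 default-size (2048 kB here) hugepages, then re-read the
# HugePages_* counters from /proc/meminfo the way the test just did.
echo 1024 > /proc/sys/vm/nr_hugepages   # equivalent to: sysctl vm.nr_hugepages=1024

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0

echo "nr_hugepages=$total resv_hugepages=$rsvd surplus_hugepages=$surp"
# Roughly the same check hugepages.sh makes above: everything requested is
# accounted for, with nothing reserved and no surplus pages.
(( total == 1024 && rsvd == 0 && surp == 0 ))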
00:04:05.206 13:12:02 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:05.206 13:12:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:05.206 13:12:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:05.206 13:12:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:05.206 ************************************
00:04:05.206 START TEST per_node_1G_alloc
00:04:05.206 ************************************
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
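get_test_nr_hugepages converts the requested 1048576 kB (1 GB) per node into default-size pages: 1048576 / 2048 = 512, applied to both requested nodes, which is where NRHUGE=512 and HUGENODE=0,1 come from. A small sketch of that arithmetic (variable names are illustrative):

#!/usr/bin/env bash
# Sketch of the per-node sizing shown above: a 1048576 kB request per node,
# divided by the default hugepage size, gives the per-node page count.
size_kb=1048576
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner

nr_per_node=$(( size_kb / hugepage_kb ))                         # 1048576 / 2048 = 512
nodes_test=()
for node_id in 0 1; do                                           # HUGENODE=0,1
    nodes_test[node_id]=$nr_per_node
done

echo "NRHUGE=$nr_per_node HUGENODE=0,1"
for node_id in "${!nodes_test[@]}"; do
    echo "node$node_id -> ${nodes_test[node_id]} hugepages"      # 512 each
done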
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.206 13:12:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:06.588 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:06.588 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:06.588 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:06.588 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:06.588 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:06.588 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:06.588 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:06.588 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:06.588 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:06.588 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:06.588 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:06.588 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:06.588 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:06.588 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:06.588 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:06.588 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:06.588 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:06.588 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:06.588 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39954312 kB' 'MemAvailable: 43534904 kB' 'Buffers: 2704 kB' 'Cached: 15994136 kB' 'SwapCached: 0 kB' 'Active: 13021404 kB' 'Inactive: 3526128 kB' 'Active(anon): 12579644 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553892 kB' 'Mapped: 210704 kB' 'Shmem: 12028952 kB' 'KReclaimable: 212656 kB' 'Slab: 586376 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373720 kB' 'KernelStack: 12896 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.589 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39955276 kB' 'MemAvailable: 43535868 kB' 'Buffers: 2704 kB' 'Cached: 15994136 kB' 'SwapCached: 0 kB' 'Active: 13021800 kB' 'Inactive: 3526128 kB' 'Active(anon): 12580040 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554340 kB' 'Mapped: 210692 kB' 'Shmem: 12028952 kB' 'KReclaimable: 212656 kB' 'Slab: 586360 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373704 kB' 'KernelStack: 12912 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.590 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 
13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.591 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.592 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39954284 kB' 'MemAvailable: 43534876 kB' 'Buffers: 2704 kB' 'Cached: 15994156 kB' 'SwapCached: 0 kB' 'Active: 13022276 kB' 'Inactive: 3526128 kB' 'Active(anon): 12580516 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554796 kB' 'Mapped: 210616 kB' 'Shmem: 12028972 kB' 'KReclaimable: 212656 kB' 'Slab: 586344 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373688 kB' 'KernelStack: 12992 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.592 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.593 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 
13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 
13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.594 
nr_hugepages=1024 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.594 resv_hugepages=0 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.594 surplus_hugepages=0 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.594 anon_hugepages=0 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39954700 kB' 'MemAvailable: 43535292 kB' 'Buffers: 2704 kB' 'Cached: 15994180 kB' 'SwapCached: 0 kB' 'Active: 13021304 kB' 'Inactive: 3526128 kB' 'Active(anon): 12579544 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553760 kB' 'Mapped: 210616 kB' 'Shmem: 12028996 kB' 'KReclaimable: 212656 kB' 'Slab: 586344 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373688 kB' 'KernelStack: 12944 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197100 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.594 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.595 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19267504 kB' 'MemUsed: 13562380 kB' 'SwapCached: 0 kB' 'Active: 8309668 kB' 'Inactive: 3279408 kB' 'Active(anon): 8018096 kB' 'Inactive(anon): 0 kB' 'Active(file): 291572 kB' 'Inactive(file): 3279408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11295976 kB' 'Mapped: 125048 kB' 'AnonPages: 296248 kB' 'Shmem: 7724996 kB' 'KernelStack: 8408 kB' 'PageTables: 5244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105744 kB' 'Slab: 298052 kB' 'SReclaimable: 105744 kB' 'SUnreclaim: 192308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.596 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.597 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 20686948 kB' 'MemUsed: 7024904 kB' 'SwapCached: 0 kB' 'Active: 4712248 kB' 'Inactive: 246720 kB' 'Active(anon): 4562060 kB' 'Inactive(anon): 0 kB' 'Active(file): 150188 kB' 'Inactive(file): 246720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4700928 kB' 'Mapped: 85568 kB' 'AnonPages: 258064 kB' 'Shmem: 4304020 kB' 'KernelStack: 4552 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106912 kB' 'Slab: 288292 kB' 'SReclaimable: 106912 kB' 'SUnreclaim: 
181380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.598 13:12:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.598 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 
13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.599 13:12:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.599 node0=512 expecting 512 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:06.599 node1=512 expecting 512 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:06.599 00:04:06.599 real 0m1.475s 00:04:06.599 user 0m0.616s 00:04:06.599 sys 0m0.822s 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.599 13:12:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.599 ************************************ 00:04:06.599 END TEST per_node_1G_alloc 00:04:06.599 ************************************ 00:04:06.599 13:12:04 
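The long scan that just finished is setup/common.sh's get_meminfo walking every /proc/meminfo (or per-node meminfo) key until it reaches the one requested, here HugePages_Surp, and echoing its value. A condensed sketch of that pattern, illustrative only and not the exact setup/common.sh source (get_meminfo_sketch is a made-up name), looks like:

shopt -s extglob
# Scan /proc/meminfo, or the per-node meminfo file when a NUMA node is given,
# and print the value of one key (0 if the key is absent).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    echo 0                             # key not found
}
# Example: get_meminfo_sketch HugePages_Free 0   -> free default-size hugepages on node 0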
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:06.599 13:12:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:06.599 13:12:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.599 13:12:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.599 13:12:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.599 ************************************ 00:04:06.599 START TEST even_2G_alloc 00:04:06.599 ************************************ 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:06.599 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.858 13:12:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
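The even_2G_alloc test above requested 2 GiB of hugepages (size 2097152 kB), arrived at nr_hugepages=1024, and split them evenly across the two NUMA nodes at 512 pages each before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are handed to scripts/setup.sh. The arithmetic reduces to roughly the following (an illustrative sketch; the variable names are not the real hugepages.sh ones):

size_kb=2097152                                   # 2 GiB in kB, as passed to get_test_nr_hugepages
default_hugepage_kb=2048                          # default hugepage size on this system
nr_hugepages=$((size_kb / default_hugepage_kb))   # 1024 pages
no_nodes=2
declare -a nodes_test
for ((node = no_nodes - 1; node >= 0; node--)); do
    nodes_test[node]=$((nr_hugepages / no_nodes))  # 512 pages per node
done
echo "node0=${nodes_test[0]} expecting 512, node1=${nodes_test[1]} expecting 512"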
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.793 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:07.793 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:07.793 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:07.793 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:07.793 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:07.793 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:08.056 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:08.056 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:08.056 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:08.056 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:08.056 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:08.056 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:08.056 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:08.056 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:08.056 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:08.056 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:08.056 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39975616 kB' 'MemAvailable: 43556208 kB' 'Buffers: 2704 kB' 
'Cached: 15994272 kB' 'SwapCached: 0 kB' 'Active: 13021668 kB' 'Inactive: 3526128 kB' 'Active(anon): 12579908 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554056 kB' 'Mapped: 210636 kB' 'Shmem: 12029088 kB' 'KReclaimable: 212656 kB' 'Slab: 586384 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373728 kB' 'KernelStack: 12960 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 
13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.056 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.057 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39975952 kB' 'MemAvailable: 43556544 kB' 'Buffers: 2704 kB' 'Cached: 15994276 kB' 'SwapCached: 0 kB' 'Active: 13021964 kB' 'Inactive: 3526128 kB' 'Active(anon): 12580204 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554416 kB' 'Mapped: 210628 kB' 'Shmem: 12029092 kB' 'KReclaimable: 212656 kB' 'Slab: 586344 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373688 kB' 'KernelStack: 12992 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 
13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.058 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.059 13:12:05 
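By this point verify_nr_hugepages has established anon=0 and surp=0 and is starting the same scan for HugePages_Rsvd. The flow amounts to three meminfo lookups before the hugepage totals are checked against the 1024 pages the test configured (512 per node), roughly as below; get_meminfo_sketch is the illustrative helper sketched earlier, not the real setup/common.sh function:

anon=$(get_meminfo_sketch AnonHugePages)   # consulted only while THP is not set to [never]
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
echo "anon=${anon} surp=${surp} resv=${resv}"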
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39976992 kB' 'MemAvailable: 43557584 kB' 'Buffers: 2704 kB' 'Cached: 15994276 kB' 'SwapCached: 0 kB' 'Active: 13021532 kB' 'Inactive: 3526128 kB' 'Active(anon): 12579772 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553980 kB' 'Mapped: 210628 kB' 'Shmem: 12029092 kB' 'KReclaimable: 212656 kB' 'Slab: 586460 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373804 kB' 'KernelStack: 12976 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.059 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 
13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.060 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
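The trace above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the one requested (HugePages_Rsvd in this pass), echoing its value and returning. The following is a minimal sketch of that lookup, reconstructed only from what the trace shows (the IFS=': ' / read -r var val _ split, the echoed value, and the per-node meminfo fallback); the name get_meminfo_sketch and the loop body are assumptions, not the actual SPDK script.

  # Assumption-based reconstruction of the key lookup exercised above;
  # not the real setup/common.sh implementation.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookups (seen later in the trace) read the node-specific file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          # Per-node meminfo lines carry a "Node N " prefix (single-digit nodes assumed here).
          line=${line#Node [0-9] }
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"          # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
              return 0
          fi
      done < "$mem_f"
      return 1
  }

Called as get_meminfo_sketch HugePages_Rsvd it would print 0 for the snapshot printed above; get_meminfo_sketch HugePages_Surp 0 would read node0's meminfo instead of /proc/meminfo.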
00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.061 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.323 nr_hugepages=1024 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.323 resv_hugepages=0 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.323 surplus_hugepages=0 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.323 anon_hugepages=0 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39977172 kB' 'MemAvailable: 43557764 kB' 'Buffers: 2704 kB' 'Cached: 15994312 kB' 'SwapCached: 0 kB' 'Active: 13021868 kB' 'Inactive: 3526128 kB' 'Active(anon): 12580108 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554284 kB' 'Mapped: 210628 kB' 'Shmem: 12029128 kB' 'KReclaimable: 212656 kB' 'Slab: 586460 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373804 kB' 'KernelStack: 12976 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13719880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.323 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
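Once surp, resv and the global HugePages_Total have been read, the hugepages.sh lines in the trace assert that the pool adds up: the (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) checks. The sketch below restates those assertions compactly, reusing the illustrative get_meminfo_sketch helper above; verify_even_alloc is a made-up wrapper name, not part of the test suite.

  verify_even_alloc() {
      local nr_hugepages=1024   # value echoed as nr_hugepages=1024 in the trace
      local surp resv total
      surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
      resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
      total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
      # The pool must be fully accounted for: no surplus or reserved pages
      # beyond the requested allocation, and the total must equal the request.
      (( total == nr_hugepages + surp + resv )) || return 1
      (( total == nr_hugepages )) || return 1
  }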
00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.324 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19259456 kB' 'MemUsed: 13570428 kB' 'SwapCached: 0 kB' 'Active: 8308900 kB' 'Inactive: 3279408 kB' 'Active(anon): 8017328 kB' 'Inactive(anon): 0 kB' 
'Active(file): 291572 kB' 'Inactive(file): 3279408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11296100 kB' 'Mapped: 125060 kB' 'AnonPages: 295384 kB' 'Shmem: 7725120 kB' 'KernelStack: 8376 kB' 'PageTables: 5100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105744 kB' 'Slab: 298084 kB' 'SReclaimable: 105744 kB' 'SUnreclaim: 192340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.325 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.326 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
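The xtrace entries around this point are setup/common.sh's get_meminfo helper walking every key of a meminfo file with IFS=': ' and read -r var val _ until it reaches the requested field (HugePages_Surp here), reading /proc/meminfo or the per-node file under /sys/devices/system/node. A minimal stand-alone sketch of that lookup follows; the helper name get_meminfo_value is hypothetical and not part of the SPDK scripts.

    # Minimal sketch of the lookup traced above (hypothetical helper name).
    # Reads /proc/meminfo, or /sys/devices/system/node/node$2/meminfo when a
    # node is given, and prints the value of the requested key.
    get_meminfo_value() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node "$node" }           # per-node rows start with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

For the snapshots captured in this run, get_meminfo_value HugePages_Surp 1 would print 0, matching the echo 0 / return 0 entries in the trace.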
00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 20717324 kB' 'MemUsed: 6994528 kB' 'SwapCached: 0 kB' 'Active: 4714752 kB' 'Inactive: 246720 kB' 'Active(anon): 4564564 kB' 
'Inactive(anon): 0 kB' 'Active(file): 150188 kB' 'Inactive(file): 246720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4700940 kB' 'Mapped: 85568 kB' 'AnonPages: 260636 kB' 'Shmem: 4304032 kB' 'KernelStack: 4600 kB' 'PageTables: 3556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106912 kB' 'Slab: 288376 kB' 'SReclaimable: 106912 kB' 'SUnreclaim: 181464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.327 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
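Once this scan returns (the echo 0 / return 0 entries that follow), setup/hugepages.sh folds the per-node surplus and reserved counts into nodes_test[] and prints the node0=512 expecting 512 / node1=512 expecting 512 comparison seen below. A simplified, self-contained sketch of that per-node check, not the script itself (the real code also tracks reserved and pre-existing system pages separately):

    # Sketch of the per-node verification: read each node's HugePages_Total and
    # HugePages_Surp straight from sysfs and compare against the expected even
    # split (2G of 2048 kB pages over two nodes = 512 each in this run).
    expected=512
    for d in /sys/devices/system/node/node[0-9]*; do
        n=${d##*node}
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$d/meminfo")
        surp=$(awk  '$3 == "HugePages_Surp:"  {print $4}' "$d/meminfo")
        echo "node$n=$(( total + surp )) expecting $expected"
        (( total + surp == expected )) || exit 1
    done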
00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:08.328 node0=512 expecting 512 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:08.328 node1=512 expecting 512 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:08.328 00:04:08.328 real 0m1.549s 00:04:08.328 user 0m0.611s 00:04:08.328 sys 0m0.902s 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.328 13:12:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.328 ************************************ 00:04:08.328 END TEST even_2G_alloc 00:04:08.328 ************************************ 00:04:08.328 13:12:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.328 13:12:05 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:08.328 13:12:05 setup.sh.hugepages 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.328 13:12:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.328 13:12:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.328 ************************************ 00:04:08.328 START TEST odd_alloc 00:04:08.328 ************************************ 00:04:08.328 13:12:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:08.328 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:08.328 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:08.328 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:08.328 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.328 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:08.328 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.329 13:12:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.708 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:09.708 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:09.708 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:09.708 0000:00:04.4 (8086 0e24): Already using 
the vfio-pci driver 00:04:09.708 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:09.708 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:09.708 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:09.708 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:09.708 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:09.708 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.708 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:09.708 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:09.708 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:09.708 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:09.708 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:09.708 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:09.708 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39984264 kB' 'MemAvailable: 43564856 kB' 'Buffers: 2704 kB' 'Cached: 15994400 kB' 'SwapCached: 0 kB' 'Active: 13018384 kB' 'Inactive: 3526128 kB' 'Active(anon): 12576624 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550560 kB' 'Mapped: 209984 kB' 'Shmem: 12029216 kB' 'KReclaimable: 
212656 kB' 'Slab: 586460 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373804 kB' 'KernelStack: 12880 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 13706640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197260 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.708 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 
13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 
13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.709 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
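The odd_alloc prologue traced earlier (setup/hugepages.sh@81-84) spreads the odd total of 1025 pages across the two nodes by giving the highest-numbered remaining node its integer share and carrying the remainder forward, which is why node1 ends up with 512 pages and node0 with 513. The following loop is a reconstruction of that split from the trace, not a copy of the script:

    # Reconstruction of the split seen in the trace: divide the remaining pages
    # by the number of remaining nodes, assign that share to the last node, and
    # repeat with the remainder for the nodes before it.
    nr_hugepages=1025
    no_nodes=2
    declare -a nodes_test
    remaining=$nr_hugepages
    while (( no_nodes > 0 )); do
        nodes_test[no_nodes - 1]=$(( remaining / no_nodes ))
        remaining=$(( remaining - nodes_test[no_nodes - 1] ))
        no_nodes=$(( no_nodes - 1 ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # -> node0=513 node1=512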
00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39992880 kB' 'MemAvailable: 43573472 kB' 'Buffers: 2704 kB' 'Cached: 15994408 kB' 'SwapCached: 0 kB' 'Active: 13015180 kB' 'Inactive: 3526128 kB' 'Active(anon): 12573420 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547392 kB' 'Mapped: 209572 kB' 'Shmem: 12029224 kB' 'KReclaimable: 212656 kB' 'Slab: 586468 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373812 kB' 'KernelStack: 12928 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 13703212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
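The AnonHugePages lookup traced above (setup/hugepages.sh@96-97) only runs because the transparent-hugepage mode string, always [madvise] never, is not pinned to [never]; the lookup returns 0 kB, so the script records anon=0 before the HugePages_Surp pass that continues below. A hedged sketch of that gate, assuming the mode string comes from the usual /sys/kernel/mm/transparent_hugepage/enabled file:

    # Sketch of the anon-hugepages gate: skip the AnonHugePages lookup entirely
    # when THP is pinned to [never], otherwise read it from /proc/meminfo.
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon=${anon:-0}"    # anon=0 in this run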
00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.710 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.711 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39989920 kB' 'MemAvailable: 43570512 kB' 'Buffers: 2704 kB' 'Cached: 15994424 kB' 'SwapCached: 0 kB' 'Active: 13018312 kB' 'Inactive: 3526128 kB' 'Active(anon): 12576552 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550508 kB' 'Mapped: 209512 kB' 'Shmem: 12029240 kB' 'KReclaimable: 212656 kB' 'Slab: 586564 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373908 kB' 'KernelStack: 12928 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 13706676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.712 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
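The long xtrace runs above and below come from the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a node-local meminfo file), then walks every key with IFS=': ' / read -r var val _, hitting "continue" for each non-matching field until it reaches the requested counter (HugePages_Surp, then HugePages_Rsvd in this run) and echoes its value. The helper below is a condensed sketch of that lookup reconstructed from the trace, not the verbatim setup/common.sh code; the name get_meminfo_sketch and the sed-based "Node N " prefix strip are simplifications.

#!/usr/bin/env bash
# Condensed sketch of the meminfo lookup traced above (assumed simplification,
# not the verbatim setup/common.sh get_meminfo). Prints the value of one
# meminfo key, either system-wide or for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Per-node queries read the node-local file when it exists; its lines
    # carry a "Node N " prefix that is stripped before parsing.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # Keys that do not match are skipped, exactly like the long run of
        # "continue" lines in the trace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# The checks traced here amount to:
#   surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0
#   resv=$(get_meminfo_sketch HugePages_Rsvd)   # -> 0

Because IFS=': ' splits the "kB" unit into the discarded third field, the echoed value is a bare number, which is what lets the later arithmetic checks in hugepages.sh compare it directly against nr_hugepages.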
00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 
13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.713 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:09.714 nr_hugepages=1025 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.714 resv_hugepages=0 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.714 surplus_hugepages=0 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.714 anon_hugepages=0 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.714 13:12:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39989672 kB' 'MemAvailable: 43570264 kB' 'Buffers: 2704 kB' 'Cached: 15994444 kB' 'SwapCached: 0 kB' 'Active: 13019056 kB' 'Inactive: 3526128 kB' 'Active(anon): 12577296 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551292 kB' 'Mapped: 209980 kB' 'Shmem: 12029260 kB' 'KReclaimable: 212656 kB' 'Slab: 586564 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373908 kB' 'KernelStack: 12976 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 13708828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.714 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.715 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19261012 kB' 'MemUsed: 13568872 kB' 'SwapCached: 0 kB' 'Active: 8307760 kB' 'Inactive: 3279408 kB' 'Active(anon): 8016188 kB' 'Inactive(anon): 0 kB' 'Active(file): 291572 kB' 'Inactive(file): 3279408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11296188 kB' 'Mapped: 124632 kB' 'AnonPages: 294188 kB' 'Shmem: 7725208 kB' 'KernelStack: 8344 kB' 'PageTables: 4972 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105744 kB' 'Slab: 298080 kB' 'SReclaimable: 105744 kB' 'SUnreclaim: 192336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 
13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.716 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 20730988 kB' 'MemUsed: 6980864 kB' 'SwapCached: 0 kB' 'Active: 4711448 kB' 'Inactive: 246720 kB' 'Active(anon): 4561260 kB' 'Inactive(anon): 0 kB' 'Active(file): 150188 kB' 'Inactive(file): 246720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4700960 kB' 'Mapped: 85348 kB' 'AnonPages: 257272 kB' 'Shmem: 4304052 kB' 'KernelStack: 4600 kB' 'PageTables: 3480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106912 kB' 'Slab: 288468 kB' 'SReclaimable: 106912 kB' 'SUnreclaim: 181556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 
13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.717 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
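The loop traced above is the per-node meminfo scan these hugepage checks rely on: the script reads /sys/devices/system/node/nodeN/meminfo (falling back to /proc/meminfo when no node is given), strips the "Node <n>" prefix, and walks the fields with IFS=': ' until it reaches the requested one, echoing its value — here HugePages_Surp, which comes back 0 for both nodes. A minimal standalone sketch of that pattern follows; the helper name and layout are illustrative assumptions, not the real get_meminfo() from setup/common.sh.

#!/usr/bin/env bash
# Minimal sketch of the per-node meminfo lookup exercised in the trace above.
# Not the real setup/common.sh get_meminfo(); node_meminfo_field is a
# hypothetical name chosen for illustration.
shopt -s extglob

node_meminfo_field() {      # usage: node_meminfo_field HugePages_Surp 0
    local field=$1 node=${2-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Prefer the per-node view when a node index is given and present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix so
    # both sources parse identically.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$field" ]] || continue   # skip every field except the one requested
        echo "$val"
        return 0
    done
    return 1
}

node_meminfo_field HugePages_Surp 0   # prints 0, as echoed for node0 in the trace above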
00:04:09.718 13:12:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:09.978 node0=512 expecting 513 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:09.978 node1=513 expecting 512 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:09.978 00:04:09.978 real 0m1.528s 00:04:09.978 user 0m0.614s 00:04:09.978 sys 0m0.877s 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.978 13:12:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.978 ************************************ 00:04:09.978 END TEST odd_alloc 00:04:09.978 ************************************ 00:04:09.978 13:12:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:09.978 13:12:07 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:09.978 13:12:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.978 13:12:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.978 13:12:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.978 ************************************ 00:04:09.978 START TEST custom_alloc 00:04:09.978 ************************************ 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.978 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.979 13:12:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.913 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:10.913 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:10.913 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:10.913 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:10.913 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:10.913 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:10.914 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:10.914 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:10.914 0000:80:04.7 
(8086 0e27): Already using the vfio-pci driver 00:04:10.914 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.914 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:10.914 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:10.914 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:10.914 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:10.914 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:10.914 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:10.914 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.176 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 38938320 kB' 'MemAvailable: 42518912 kB' 'Buffers: 2704 kB' 'Cached: 15994536 kB' 'SwapCached: 0 kB' 'Active: 13016756 kB' 'Inactive: 3526128 kB' 'Active(anon): 12574996 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549048 kB' 'Mapped: 209572 kB' 'Shmem: 12029352 kB' 'KReclaimable: 212656 kB' 'Slab: 586060 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373404 kB' 'KernelStack: 13056 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 13704372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 
13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.177 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
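The entries above close out the AnonHugePages lookup (anon=0), and the next ones re-run the same helper for HugePages_Surp, starting with the choice of data source. A minimal sketch of that source selection, assuming a simplified stand-in (read_meminfo is a hypothetical name, not the real setup/common.sh function): with a NUMA node argument the per-node sysfs meminfo is read and its "Node N " prefix stripped, otherwise /proc/meminfo is used, as the mem_f=/proc/meminfo entry in the trace shows.

    #!/usr/bin/env bash
    shopt -s extglob                          # needed for the +([0-9]) pattern below

    read_meminfo() {                          # hypothetical helper, for illustration only
        local node=$1 mem_f=/proc/meminfo
        # With a node number, read the per-node stats from sysfs instead of the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines are prefixed "Node 0 MemTotal: ..."; strip that prefix so both
        # sources yield plain "Key: value" lines, matching the expansion seen in the trace.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }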
00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 38944452 kB' 'MemAvailable: 42525044 kB' 'Buffers: 2704 kB' 'Cached: 15994536 kB' 'SwapCached: 0 kB' 'Active: 13019792 kB' 'Inactive: 3526128 kB' 'Active(anon): 12578032 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551984 kB' 'Mapped: 209572 kB' 'Shmem: 12029352 kB' 'KReclaimable: 212656 kB' 'Slab: 586052 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373396 kB' 'KernelStack: 12960 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 13707568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.178 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 
13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.179 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 38945368 kB' 'MemAvailable: 42525960 kB' 'Buffers: 2704 kB' 'Cached: 15994536 kB' 'SwapCached: 0 kB' 'Active: 13019408 kB' 'Inactive: 3526128 kB' 
'Active(anon): 12577648 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551524 kB' 'Mapped: 209924 kB' 'Shmem: 12029352 kB' 'KReclaimable: 212656 kB' 'Slab: 586012 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373356 kB' 'KernelStack: 12912 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 13707592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 
13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.180 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.181 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
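Each [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pair in the trace is one pass of the same lookup loop: a meminfo line is split on ': ' into key and value, non-matching keys are skipped, and the value of the requested key is echoed. A hedged sketch of that loop, reusing the hypothetical read_meminfo helper sketched earlier (get_meminfo here is a simplified stand-in, not the traced setup/common.sh implementation):

    get_meminfo() {                           # simplified stand-in for the traced function
        local get=$1 node=$2 var val _
        while IFS=': ' read -r var val _; do  # split "Key: value [kB]" into key / value
            [[ $var == "$get" ]] || continue  # every non-matching key is skipped, as in the trace
            echo "$val"                       # e.g. 0 for HugePages_Rsvd, 1536 for HugePages_Total
            return 0
        done < <(read_meminfo "$node")
        echo 0                                # key absent: report 0
    }

Captured as, for example, surp=$(get_meminfo HugePages_Surp), which is the kind of assignment the hugepages.sh lines in this trace record as surp=0 and resv=0.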
00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:11.182 nr_hugepages=1536 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.182 resv_hugepages=0 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.182 surplus_hugepages=0 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.182 anon_hugepages=0 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 38942796 kB' 'MemAvailable: 42523388 kB' 'Buffers: 2704 kB' 'Cached: 15994572 kB' 'SwapCached: 0 kB' 'Active: 13014824 kB' 'Inactive: 3526128 kB' 'Active(anon): 12573064 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546940 kB' 'Mapped: 209488 kB' 'Shmem: 12029388 kB' 'KReclaimable: 212656 kB' 'Slab: 586072 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373416 kB' 'KernelStack: 12944 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 
'Committed_AS: 13702644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.182 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
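Before re-reading HugePages_Total here, hugepages.sh has already reported nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, so the consistency checks at hugepages.sh@107 and @109 in the trace reduce to plain arithmetic. A minimal worked sketch with those values (the echo wording is illustrative; only the variable names and numbers come from the trace):

    # Values reported in the trace above.
    nr_hugepages=1536
    anon=0 surp=0 resv=0

    # The requested count (1536 in this run) should equal the allocated pages plus
    # surplus and reserved pages, and should match nr_hugepages exactly.
    if (( 1536 == nr_hugepages + surp + resv )) && (( 1536 == nr_hugepages )); then
        echo "hugepages consistent: total=$nr_hugepages surp=$surp resv=$resv anon=$anon"
    else
        echo "hugepages mismatch: total=$nr_hugepages surp=$surp resv=$resv" >&2
    fi

With the traced values both conditions hold, so the script proceeds to the HugePages_Total scan that follows.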
00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.183 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.184 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.184 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19274764 kB' 'MemUsed: 13555120 kB' 'SwapCached: 0 kB' 'Active: 8308148 kB' 'Inactive: 3279408 kB' 'Active(anon): 8016576 kB' 'Inactive(anon): 0 kB' 'Active(file): 291572 kB' 'Inactive(file): 3279408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11296332 kB' 'Mapped: 124176 kB' 'AnonPages: 294424 kB' 'Shmem: 7725352 kB' 'KernelStack: 8264 kB' 'PageTables: 4752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105744 kB' 'Slab: 297780 kB' 'SReclaimable: 105744 kB' 'SUnreclaim: 192036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.443 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
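The node-0 dump being scanned here (the long printf '%s\n' 'MemTotal: 32829884 kB' ... entry above) is internally consistent: MemUsed is MemTotal minus MemFree, and the 512 hugepages reported for node 0, together with the 1024 on node 1, account for the global HugePages_Total of 1536 checked earlier. A quick check of those figures:

  # consistency of the per-node figures quoted in the trace
  echo $(( 32829884 - 19274764 ))   # MemTotal - MemFree = 13555120 kB, the MemUsed value printed
  echo $(( 512 + 1024 ))            # node0 + node1 hugepages = 1536, the global HugePages_Total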
00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.444 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
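Around the meminfo scans, the hugepages.sh@115 to @117 entries trace the per-node half of the verification: having already confirmed that the global HugePages_Total of 1536 equals nr_hugepages + surp + resv, the script walks each NUMA node, folds the reserved count into that node's target, and adds the node's HugePages_Surp (0 for both nodes in this run) read from /sys/devices/system/node/nodeN/meminfo. An illustrative, self-contained reconstruction of that loop follows; the array contents and the node_surplus helper are stand-ins, not the script's own code:

  #!/usr/bin/env bash
  # Stand-in reconstruction of the hugepages.sh@115-@117 loop seen in the trace.
  declare -a nodes_test=(512 1024)      # per-node targets recorded in this log (node0, node1)
  resv=0                                # HugePages_Rsvd is 0 in the dumps above

  node_surplus() {                      # stand-in for "get_meminfo HugePages_Surp <node>"
      local f="/sys/devices/system/node/node$1/meminfo" val
      val=$(awk '/HugePages_Surp/ {print $NF}' "$f" 2>/dev/null)
      echo "${val:-0}"
  }

  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                      # hugepages.sh@116
      (( nodes_test[node] += $(node_surplus "$node") ))   # hugepages.sh@117, adds 0 here
  done
  echo "node0=${nodes_test[0]} expecting 512"             # matches the echoes later in this log
  echo "node1=${nodes_test[1]} expecting 1024"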
00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.445 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 19667280 kB' 'MemUsed: 8044572 kB' 'SwapCached: 0 kB' 'Active: 4710956 kB' 'Inactive: 246720 kB' 'Active(anon): 4560768 kB' 'Inactive(anon): 0 kB' 'Active(file): 150188 kB' 'Inactive(file): 246720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4700972 kB' 'Mapped: 85348 kB' 'AnonPages: 256796 kB' 'Shmem: 4304064 kB' 'KernelStack: 4600 kB' 'PageTables: 3316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106912 kB' 'Slab: 288292 kB' 'SReclaimable: 106912 kB' 'SUnreclaim: 181380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 
13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.445 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
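For the node-scoped lookups (node=0 above and node=1 here), get_meminfo switches mem_f from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo and then rewrites the mapfile'd entries with mem=("${mem[@]#Node +([0-9]) }"). Lines in the per-node file carry a "Node N " prefix that the global file lacks, so stripping it with that extglob pattern lets the same field-scan loop handle both formats. A small standalone illustration of the rewrite, with sample data in place of the real file and extglob enabled explicitly, as it evidently is in the traced shell:

  #!/usr/bin/env bash
  shopt -s extglob                         # needed for the +([0-9]) pattern used in the trace
  # sample entries in the per-node format; /proc/meminfo has no "Node N " prefix
  mem=('Node 1 MemTotal: 27711852 kB' 'Node 1 HugePages_Surp: 0')
  mem=("${mem[@]#Node +([0-9]) }")         # strip the leading "Node <n> " from every element
  printf '%s\n' "${mem[@]}"
  # MemTotal: 27711852 kB
  # HugePages_Surp: 0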
00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.446 node0=512 expecting 512 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:11.446 node1=1024 expecting 1024 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:11.446 00:04:11.446 real 0m1.467s 00:04:11.446 user 0m0.591s 00:04:11.446 sys 0m0.840s 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.446 13:12:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:11.447 ************************************ 00:04:11.447 END TEST custom_alloc 00:04:11.447 ************************************ 00:04:11.447 13:12:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:11.447 13:12:08 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:11.447 13:12:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.447 13:12:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.447 13:12:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.447 ************************************ 00:04:11.447 START TEST no_shrink_alloc 00:04:11.447 ************************************ 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.447 13:12:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.827 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:12.827 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:12.827 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:12.827 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:12.827 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:12.827 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:12.827 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:12.827 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:12.827 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:12.827 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.827 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:12.827 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:12.827 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:12.827 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:12.827 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:12.827 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:12.827 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
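The START TEST no_shrink_alloc entries just above show the next test's sizing: get_test_nr_hugepages is called with a size of 2097152 and node id 0, and reading that size as kB against the 2048 kB Hugepagesize reported in the meminfo dumps gives the nr_hugepages=1024 placed entirely on node 0 that the trace records, after which scripts/setup.sh re-runs and verify_nr_hugepages checks the transparent hugepage setting and queries AnonHugePages from the global /proc/meminfo (hence the empty node argument in the local node= entry this note follows). The arithmetic as a sketch; the kB unit is an inference from the traced values, not stated in the log:

  # sizing used by no_shrink_alloc, reconstructed from the traced values
  size_kb=2097152              # argument passed to get_test_nr_hugepages
  hugepage_kb=2048             # Hugepagesize from the meminfo dumps
  nr_hugepages=$(( size_kb / hugepage_kb ))
  echo "$nr_hugepages"         # 1024, all assigned to node 0 (node_ids=('0'))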
00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39969096 kB' 'MemAvailable: 43549688 kB' 'Buffers: 2704 kB' 'Cached: 15994664 kB' 'SwapCached: 0 kB' 'Active: 13012584 kB' 'Inactive: 3526128 kB' 'Active(anon): 12570824 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544580 kB' 'Mapped: 209204 kB' 'Shmem: 12029480 kB' 'KReclaimable: 212656 kB' 'Slab: 586508 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373852 kB' 'KernelStack: 12848 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13698368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.827 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.828 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39968844 kB' 'MemAvailable: 43549436 kB' 'Buffers: 2704 kB' 'Cached: 15994668 kB' 'SwapCached: 0 kB' 'Active: 13012796 kB' 'Inactive: 3526128 kB' 'Active(anon): 12571036 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544796 kB' 'Mapped: 209164 kB' 
'Shmem: 12029484 kB' 'KReclaimable: 212656 kB' 'Slab: 586500 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373844 kB' 'KernelStack: 12880 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13698384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
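Annotation: the xtrace above is emitted by get_meminfo() in setup/common.sh while it scans /proc/meminfo for a single field (here AnonHugePages, HugePages_Surp, and later HugePages_Rsvd/HugePages_Total). A minimal bash sketch of what that helper appears to do, reconstructed only from the trace (argument handling, the per-node fallback path, and the return-1 case are assumptions, not taken from the script itself):

    shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {
        # Usage: get_meminfo <field> [numa-node]; prints the numeric value of the
        # requested /proc/meminfo (or per-node meminfo) field.
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # When a node is given and a per-node meminfo exists, read that instead
        # (assumed fallback; the trace only shows the existence test).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that prefix so
        # both file formats parse identically.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every field except the one asked for
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1   # field not present (assumed)
    }

    # Example, mirroring the calls traced in this log:
    #   anon=$(get_meminfo AnonHugePages)    # -> 0 on this host
    #   surp=$(get_meminfo HugePages_Surp)   # -> 0
    #   resv=$(get_meminfo HugePages_Rsvd)   # -> 0

The caller in setup/hugepages.sh then records these as anon/surp/resv, echoes nr_hugepages=1024 and the zero counters, and compares the reported HugePages_Total (1024 here) against nr_hugepages + surp + resv before continuing with the no_shrink_alloc test, as seen further down in the trace.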
00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.829 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.830 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39968592 kB' 'MemAvailable: 43549184 kB' 'Buffers: 2704 kB' 'Cached: 15994688 kB' 'SwapCached: 0 kB' 'Active: 13012232 kB' 'Inactive: 3526128 kB' 'Active(anon): 12570472 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544188 kB' 'Mapped: 209088 kB' 'Shmem: 12029504 kB' 'KReclaimable: 212656 kB' 'Slab: 586516 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373860 kB' 'KernelStack: 12880 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 
'Committed_AS: 13699776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.831 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.832 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.833 nr_hugepages=1024 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.833 resv_hugepages=0 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.833 surplus_hugepages=0 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.833 anon_hugepages=0 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39968604 kB' 'MemAvailable: 43549196 kB' 'Buffers: 2704 kB' 'Cached: 15994704 kB' 'SwapCached: 0 kB' 'Active: 13013056 kB' 'Inactive: 3526128 kB' 'Active(anon): 12571296 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545016 kB' 'Mapped: 209088 kB' 'Shmem: 12029520 kB' 'KReclaimable: 212656 kB' 'Slab: 586516 kB' 'SReclaimable: 212656 kB' 'SUnreclaim: 373860 kB' 'KernelStack: 12912 kB' 
'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13699424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.833 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
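The trace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches the requested key (here HugePages_Total) and echoing its value. A minimal standalone sketch of the same lookup follows, under the assumption of a hypothetical helper name; it is not the actual setup/common.sh implementation, which first loads the whole table with mapfile.

#!/usr/bin/env bash
# Hypothetical helper (not part of setup/common.sh): scan /proc/meminfo with
# IFS=': ' and print the value of the requested field, e.g. HugePages_Total.
get_meminfo_value() {
  local key=$1 var val _
  while IFS=': ' read -r var val _; do
    # var is the field name without the colon, val its first value token
    [[ $var == "$key" ]] && { echo "$val"; return 0; }
  done < /proc/meminfo
  return 1
}
# Example: get_meminfo_value HugePages_Total   # prints e.g. 1024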
00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 
13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.834 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18223560 kB' 'MemUsed: 14606324 kB' 'SwapCached: 0 kB' 'Active: 8308616 kB' 'Inactive: 3279408 kB' 'Active(anon): 8017044 kB' 'Inactive(anon): 0 kB' 'Active(file): 291572 kB' 'Inactive(file): 3279408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11296436 kB' 'Mapped: 123748 kB' 'AnonPages: 294712 kB' 'Shmem: 7725456 kB' 'KernelStack: 8808 kB' 'PageTables: 5932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105744 kB' 'Slab: 298084 kB' 'SReclaimable: 105744 kB' 'SUnreclaim: 192340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:12.835 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.835 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
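At this point the same scan runs against the per-node file /sys/devices/system/node/node0/meminfo (common.sh@23/@24 switch mem_f to it, and @29 strips the leading "Node 0 " prefix) to read HugePages_Surp for node 0. A per-node sketch under the same assumptions, with a hypothetical helper name, is below; the extra two read fields absorb the "Node <N>" prefix that the real script strips with an extglob substitution.

#!/usr/bin/env bash
# Hypothetical per-node variant: same field scan, but against
# /sys/devices/system/node/node<N>/meminfo, whose lines start with "Node <N> ".
get_node_meminfo_value() {
  local node=$1 key=$2 _n _i var val _
  while IFS=': ' read -r _n _i var val _; do
    # _n="Node", _i=node number, var=field name, val=first value token
    [[ $var == "$key" ]] && { echo "$val"; return 0; }
  done < "/sys/devices/system/node/node${node}/meminfo"
  return 1
}
# Example: get_node_meminfo_value 0 HugePages_Surp   # prints e.g. 0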
00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.836 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:12.837 node0=1024 expecting 1024 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.837 13:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.218 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:14.218 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:14.218 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:14.218 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:14.218 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:14.218 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:14.218 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:14.218 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:14.218 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:14.218 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:14.218 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:14.218 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:14.218 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:14.218 0000:80:04.3 (8086 
0e23): Already using the vfio-pci driver 00:04:14.218 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:14.218 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:14.218 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:14.218 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39989748 kB' 'MemAvailable: 43570324 kB' 'Buffers: 2704 kB' 'Cached: 15994772 kB' 'SwapCached: 0 kB' 'Active: 13015728 kB' 'Inactive: 3526128 kB' 'Active(anon): 12573968 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547588 kB' 'Mapped: 210056 kB' 'Shmem: 12029588 kB' 'KReclaimable: 212624 kB' 'Slab: 586316 kB' 'SReclaimable: 212624 kB' 'SUnreclaim: 373692 kB' 'KernelStack: 12848 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13701716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.218 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.219 13:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.219 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39990208 kB' 'MemAvailable: 43570784 kB' 'Buffers: 2704 kB' 'Cached: 15994776 kB' 'SwapCached: 0 kB' 'Active: 13019144 kB' 'Inactive: 3526128 kB' 'Active(anon): 12577384 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550928 kB' 'Mapped: 209628 kB' 'Shmem: 12029592 kB' 'KReclaimable: 212624 kB' 'Slab: 586296 kB' 'SReclaimable: 212624 kB' 'SUnreclaim: 373672 kB' 'KernelStack: 12816 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13704520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197068 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
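What the xtrace above is stepping through is a plain key lookup over /proc/meminfo: get_meminfo reads the file as 'key: value' pairs (or a per-node meminfo file when a NUMA node is passed in, which is why the trace tests /sys/devices/system/node/node/meminfo with an empty node value and stays on /proc/meminfo), skips every key that is not the requested one, then prints the value of the matching key and returns so that hugepages.sh can capture it (anon=0 above). Below is a minimal bash sketch of that behaviour, assuming only what is visible in the trace; the helper name, the sed-based 'Node N ' stripping and the path-selection logic are illustrative stand-ins for what setup/common.sh actually does with mapfile and an extglob pattern.

  # Sketch only, not the literal SPDK helper: approximate the lookup traced above.
  get_meminfo_sketch() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo var val _
      # Assumption: with a non-empty node the per-node file is used; the empty
      # node in this trace makes the existence test fail and keeps /proc/meminfo.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then      # the real script matches a literal (escaped) pattern
              echo "${val:-0}"               # e.g. 0 for AnonHugePages, 1024 for HugePages_Total
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix each line with "Node N "
      echo 0
  }

  # Calls matching the ones traced in this section:
  #   anon=$(get_meminfo_sketch AnonHugePages)
  #   surp=$(get_meminfo_sketch HugePages_Surp)
  #   resv=$(get_meminfo_sketch HugePages_Rsvd)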
00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.220 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.221 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39990208 kB' 'MemAvailable: 43570784 kB' 'Buffers: 2704 kB' 'Cached: 15994800 kB' 'SwapCached: 0 kB' 'Active: 13012588 kB' 'Inactive: 3526128 kB' 'Active(anon): 12570828 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544488 kB' 'Mapped: 209524 kB' 'Shmem: 12029616 kB' 'KReclaimable: 212624 kB' 'Slab: 586328 kB' 'SReclaimable: 212624 kB' 'SUnreclaim: 373704 kB' 'KernelStack: 12896 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13698792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 
13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
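A side note on why every key in the comparisons above is rendered as \H\u\g\e\P\a\g\e\s\_\R\s\v\d: when the right-hand side of == inside [[ ]] is quoted, bash's xtrace re-escapes each character so the printed command still reads as a literal string match rather than a glob pattern. A minimal demo (the echoed trace line is approximate):

  set -x
  key=HugePages_Rsvd
  [[ $key == "HugePages_Rsvd" ]] && echo literal-match
  # xtrace prints roughly: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
  set +x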
00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.222 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:14.223 nr_hugepages=1024 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.223 resv_hugepages=0 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.223 surplus_hugepages=0 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.223 anon_hugepages=0 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- 
# local mem_f mem 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.223 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39989960 kB' 'MemAvailable: 43570536 kB' 'Buffers: 2704 kB' 'Cached: 15994804 kB' 'SwapCached: 0 kB' 'Active: 13012248 kB' 'Inactive: 3526128 kB' 'Active(anon): 12570488 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544188 kB' 'Mapped: 209524 kB' 'Shmem: 12029620 kB' 'KReclaimable: 212624 kB' 'Slab: 586328 kB' 'SReclaimable: 212624 kB' 'SUnreclaim: 373704 kB' 'KernelStack: 12880 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 13698816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 37824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1791580 kB' 'DirectMap2M: 13856768 kB' 'DirectMap1G: 53477376 kB' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 
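The long run of key comparisons here is setup/common.sh's get_meminfo helper scanning the /proc/meminfo dump it just printed, one field per iteration, until it reaches the requested key (HugePages_Total at this point). A minimal standalone sketch of the same lookup pattern follows; the function name and the sed-based stripping of the per-node "Node <N>" prefix are my own simplifications (the traced helper uses mapfile plus an extglob expansion instead):

get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    # With a node number, prefer that node's sysfs meminfo when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix every line with "Node <N> "; drop it, then split on ': '.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Example: get_meminfo_value HugePages_Total   -> 1024 on this box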
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.224 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- 
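Once the total is confirmed, get_nodes (traced just above) walks /sys/devices/system/node/node* to learn how the 1024 pages are spread across the two NUMA nodes before re-reading node0's meminfo. A rough equivalent reads the per-node 2048 kB counters straight from sysfs; the 2048 kB size directory is an assumption taken from the Hugepagesize value in the dump above:

nodes_sys=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node count of configured 2 MiB huge pages; 0 if the size dir is absent.
    nodes_sys[$node]=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages" 2>/dev/null || echo 0)
done
echo "nodes: ${!nodes_sys[*]} -> counts: ${nodes_sys[*]}"   # e.g. "nodes: 0 1 -> counts: 1024 0"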
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18230228 kB' 'MemUsed: 14599656 kB' 'SwapCached: 0 kB' 'Active: 8307968 kB' 'Inactive: 3279408 kB' 'Active(anon): 8016396 kB' 'Inactive(anon): 0 kB' 'Active(file): 291572 kB' 'Inactive(file): 3279408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11296500 kB' 'Mapped: 123756 kB' 'AnonPages: 294176 kB' 'Shmem: 7725520 kB' 'KernelStack: 8360 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105728 kB' 'Slab: 298052 kB' 'SReclaimable: 105728 kB' 'SUnreclaim: 192324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.225 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 
13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.226 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.226 13:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:14.227 node0=1024 expecting 1024 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:14.227 00:04:14.227 real 0m2.943s 00:04:14.227 user 0m1.144s 00:04:14.227 sys 0m1.717s 00:04:14.227 
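The "node0=1024 expecting 1024" verdict above is the point of the whole no_shrink_alloc test: after the allocation exercise, node 0 must still hold exactly the requested pages and report no surplus. Expressed with the lookup helper sketched earlier (helper name hypothetical):

expected=1024
total=$(get_meminfo_value HugePages_Total 0)   # per-node lookup, node 0
surp=$(get_meminfo_value HugePages_Surp 0)
(( total == expected && surp == 0 )) && echo "node0=$total expecting $expected"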
13:12:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.227 13:12:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.227 ************************************ 00:04:14.227 END TEST no_shrink_alloc 00:04:14.227 ************************************ 00:04:14.485 13:12:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:14.485 13:12:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:14.485 00:04:14.485 real 0m11.900s 00:04:14.485 user 0m4.398s 00:04:14.485 sys 0m6.374s 00:04:14.486 13:12:11 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.486 13:12:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.486 ************************************ 00:04:14.486 END TEST hugepages 00:04:14.486 ************************************ 00:04:14.486 13:12:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:14.486 13:12:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:14.486 13:12:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.486 13:12:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.486 13:12:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.486 ************************************ 00:04:14.486 START TEST driver 00:04:14.486 ************************************ 00:04:14.486 13:12:11 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:14.486 * Looking for test storage... 
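Before the hugepages suite hands off to the driver tests, the clear_hp teardown traced above zeroes every per-node huge page pool. The same effect can be had directly from sysfs (requires root; the hugepages-* glob covers both the 2 MiB and 1 GiB pools):

for node_dir in /sys/devices/system/node/node[0-9]*; do
    for nr in "$node_dir"/hugepages/hugepages-*/nr_hugepages; do
        [[ -e $nr ]] || continue
        echo 0 > "$nr"    # release all reserved pages of this size on this node
    done
done
export CLEAR_HUGE=yes     # exported by the traced script; consumed by later setup.sh runs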
00:04:14.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:14.486 13:12:11 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:14.486 13:12:11 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.486 13:12:11 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.056 13:12:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:17.056 13:12:14 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.056 13:12:14 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.056 13:12:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.056 ************************************ 00:04:17.056 START TEST guess_driver 00:04:17.056 ************************************ 00:04:17.056 13:12:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:17.056 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:17.056 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:17.056 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:17.057 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:17.057 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:17.057 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:17.057 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:17.057 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:17.057 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:17.057 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:17.057 13:12:14 setup.sh.driver.guess_driver 
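The guess_driver trace above boils down to one decision: with 141 IOMMU groups present and vfio_pci resolvable by modprobe, vfio-pci wins. A condensed sketch of that logic, paraphrased from the trace (the real driver.sh also falls back to the uio drivers and inspects the modprobe output for .ko entries, both omitted here):

pick_driver() {
    local unsafe=N n_groups
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
    # vfio-pci needs a working IOMMU (or explicitly enabled no-IOMMU mode)
    # and the module must be loadable on this kernel.
    if { (( n_groups > 0 )) || [[ $unsafe == [Yy] ]]; } &&
        modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}
driver=$(pick_driver)   # -> vfio-pci on this host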
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:17.057 Looking for driver=vfio-pci 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.057 13:12:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 
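The run of "[[ -> == \-\> ]]" / "[[ vfio-pci == vfio-pci ]]" checks surrounding this point is the verification pass over "setup output config": every rebind line the script prints must end in the driver just guessed. The loop pattern, reconstructed from the trace (the "<BDF> (<vendor> <device>): <old> -> <new>" output format is inferred here, not guaranteed):

fail=0
while read -r _ _ _ _ marker bound; do
    [[ $marker == '->' ]] || continue      # only look at rebind report lines
    [[ $bound == "$driver" ]] || fail=1    # $driver was set to vfio-pci above
done < <("$rootdir/scripts/setup.sh" config)   # $rootdir: SPDK checkout, assumed set
(( fail == 0 )) && echo "all devices bound to $driver"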
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.434 13:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.371 13:12:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.371 13:12:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.371 13:12:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.628 13:12:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:19.628 13:12:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:19.628 13:12:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.628 13:12:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.222 00:04:22.222 real 0m5.074s 00:04:22.222 user 0m1.179s 00:04:22.222 sys 0m1.900s 00:04:22.222 13:12:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.222 13:12:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.222 ************************************ 00:04:22.222 END TEST guess_driver 00:04:22.222 ************************************ 00:04:22.222 13:12:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:22.222 00:04:22.222 real 0m7.732s 00:04:22.222 user 0m1.751s 00:04:22.222 sys 0m2.944s 00:04:22.222 13:12:19 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.222 13:12:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.222 ************************************ 00:04:22.222 END TEST driver 00:04:22.222 ************************************ 00:04:22.222 13:12:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:22.222 13:12:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:22.222 13:12:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.222 13:12:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.222 13:12:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.222 ************************************ 00:04:22.222 START TEST devices 00:04:22.222 ************************************ 00:04:22.222 13:12:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:22.222 * Looking for test storage... 00:04:22.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:22.222 13:12:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:22.222 13:12:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:22.222 13:12:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.222 13:12:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:23.601 13:12:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:23.601 13:12:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:23.601 13:12:21 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:23.601 13:12:21 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:23.601 13:12:21 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:23.601 13:12:21 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:23.601 13:12:21 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.601 13:12:21 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:23.601 13:12:21 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:23.601 13:12:21 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:23.601 
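The devices suite starts (trace above) by filtering out zoned namespaces, which the mount tests cannot use. The check is a single standard sysfs attribute; a standalone version:

is_block_zoned() {
    local dev=$1
    # queue/zoned reads "none" for ordinary block devices,
    # "host-managed"/"host-aware" for zoned ones.
    [[ -e /sys/block/$dev/queue/zoned && $(< "/sys/block/$dev/queue/zoned") != none ]]
}
for path in /sys/block/nvme*n*; do
    [[ -e $path ]] || continue              # no NVMe namespaces present at all
    dev=${path##*/}
    is_block_zoned "$dev" && echo "skipping zoned device $dev"
done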
13:12:21 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.859 No valid GPT data, bailing 00:04:23.859 13:12:21 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.859 13:12:21 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:23.859 13:12:21 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:23.859 13:12:21 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.859 13:12:21 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.859 13:12:21 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.859 13:12:21 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:23.859 13:12:21 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:23.859 13:12:21 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.859 13:12:21 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:04:23.859 13:12:21 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:23.859 13:12:21 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:23.860 13:12:21 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:23.860 13:12:21 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.860 13:12:21 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.860 13:12:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.860 ************************************ 00:04:23.860 START TEST nvme_mount 00:04:23.860 ************************************ 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.860 13:12:21 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.800 Creating new GPT entries in memory. 00:04:24.800 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.800 other utilities. 00:04:24.800 13:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.800 13:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.800 13:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.800 13:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.800 13:12:22 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.740 Creating new GPT entries in memory. 00:04:25.740 The operation has completed successfully. 00:04:25.740 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.740 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.740 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3432257 00:04:25.740 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.740 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:25.740 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.740 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.740 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.997 13:12:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.997 13:12:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.933 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:26.934 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:27.193 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.193 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.453 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:27.453 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:27.453 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:27.453 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.453 13:12:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.830 13:12:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.830 13:12:26 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.830 13:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.207 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.207 00:04:30.207 real 0m6.490s 00:04:30.207 user 0m1.538s 00:04:30.207 sys 0m2.546s 00:04:30.207 13:12:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.207 13:12:27 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:30.207 ************************************ 00:04:30.207 END TEST nvme_mount 00:04:30.207 ************************************ 00:04:30.208 13:12:27 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:30.208 13:12:27 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:30.208 13:12:27 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.208 13:12:27 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.208 13:12:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:30.208 ************************************ 00:04:30.208 START TEST dm_mount 00:04:30.208 ************************************ 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:30.208 13:12:27 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:31.590 Creating new GPT entries in memory. 00:04:31.590 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:31.590 other utilities. 00:04:31.590 13:12:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:31.590 13:12:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.590 13:12:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:31.590 13:12:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.590 13:12:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:32.530 Creating new GPT entries in memory. 00:04:32.530 The operation has completed successfully. 00:04:32.530 13:12:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:32.530 13:12:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.530 13:12:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.530 13:12:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.530 13:12:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:33.469 The operation has completed successfully. 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3434647 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:33.469 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.470 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:33.470 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.470 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.470 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:33.470 13:12:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.470 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.470 13:12:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:34.895 13:12:32 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.895 13:12:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:35.832 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:36.090 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:36.349 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.349 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:36.349 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.349 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:36.349 13:12:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:36.349 00:04:36.349 real 0m5.932s 00:04:36.349 user 0m1.141s 00:04:36.349 sys 0m1.665s 00:04:36.349 13:12:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.349 13:12:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:36.349 ************************************ 00:04:36.349 END TEST dm_mount 00:04:36.349 ************************************ 00:04:36.349 13:12:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0
00:04:36.349 13:12:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:36.349 13:12:33 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:36.349 13:12:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.349 13:12:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:36.349 13:12:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:36.349 13:12:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:36.349 13:12:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:36.608 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:36.608 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:36.608 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:36.608 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:36.608 13:12:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:36.608 13:12:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:36.608 13:12:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:36.608 13:12:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:36.608 13:12:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:36.608 13:12:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:36.608 13:12:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:36.608
00:04:36.608 real 0m14.361s
00:04:36.608 user 0m3.335s
00:04:36.608 sys 0m5.250s
00:04:36.608 13:12:33 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:36.608 13:12:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:36.608 ************************************
00:04:36.608 END TEST devices
00:04:36.608 ************************************
00:04:36.608 13:12:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:36.608
00:04:36.608 real 0m45.141s
00:04:36.608 user 0m12.947s
00:04:36.608 sys 0m20.310s
00:04:36.608 13:12:33 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:36.608 13:12:33 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:36.608 ************************************
00:04:36.608 END TEST setup.sh
00:04:36.608 ************************************
00:04:36.608 13:12:33 -- common/autotest_common.sh@1142 -- # return 0
00:04:36.608 13:12:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:37.986 Hugepages
00:04:37.986 node hugesize free / total
00:04:37.986 node0 1048576kB 0 / 0
00:04:37.986 node0 2048kB 2048 / 2048
00:04:37.986 node1 1048576kB 0 / 0
00:04:37.986 node1 2048kB 0 / 0
00:04:37.986
00:04:37.986 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:37.986 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:04:37.986 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:04:37.986 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:04:37.986 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:04:37.986 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:04:37.986 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:04:37.986 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:04:37.986 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:04:37.986 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:37.986 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:04:37.986 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:04:37.986 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:04:37.986 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:04:37.986 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:04:37.986 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:04:37.986 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:04:37.986 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:04:37.986 13:12:35 -- spdk/autotest.sh@130 -- # uname -s
00:04:37.986 13:12:35 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:37.986 13:12:35 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:37.986 13:12:35 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:39.369 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:39.369 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:39.369 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:39.369 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:39.369 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:39.369 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:39.369 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:39.369 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:39.369 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:39.369 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:39.369 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:39.369 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:39.369 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:39.369 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:39.369 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:39.369 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:40.306 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:04:40.306 13:12:37 -- common/autotest_common.sh@1532 -- # sleep 1
00:04:41.687 13:12:38 -- common/autotest_common.sh@1533 -- # bdfs=()
00:04:41.687 13:12:38 -- common/autotest_common.sh@1533 -- # local bdfs
00:04:41.687 13:12:38 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:04:41.687 13:12:38 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:04:41.687 13:12:38 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:41.687 13:12:38 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:41.687 13:12:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:41.687 13:12:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:41.687 13:12:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:41.687 13:12:38 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:04:41.687 13:12:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0
00:04:41.687 13:12:38 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:42.621 Waiting for block devices as requested
00:04:42.621 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:42.621 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:42.880 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:42.880 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:42.880 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:42.880 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:43.138 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:43.138 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:43.138 0000:0b:00.0 (8086 0a54): vfio-pci ->
nvme 00:04:43.397 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:43.397 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:43.397 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:43.656 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:43.656 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:43.656 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:43.656 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:43.914 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:43.914 13:12:41 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:43.914 13:12:41 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:43.914 13:12:41 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:43.914 13:12:41 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:04:43.914 13:12:41 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:43.914 13:12:41 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:43.914 13:12:41 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:43.914 13:12:41 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:43.914 13:12:41 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:43.914 13:12:41 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:43.914 13:12:41 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:43.914 13:12:41 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:43.914 13:12:41 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:43.914 13:12:41 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:43.914 13:12:41 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:43.914 13:12:41 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:43.914 13:12:41 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:43.914 13:12:41 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:43.914 13:12:41 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:43.914 13:12:41 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:43.914 13:12:41 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:43.914 13:12:41 -- common/autotest_common.sh@1557 -- # continue 00:04:43.914 13:12:41 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:43.914 13:12:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.914 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:04:43.914 13:12:41 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:43.914 13:12:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.914 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:04:43.914 13:12:41 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.292 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:45.292 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:45.292 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:45.292 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:45.292 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:45.292 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:45.292 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:45.292 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:45.292 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:45.292 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:45.292 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:45.292 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:45.292 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:45.292 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:45.292 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:45.292 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:46.227 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:46.486 13:12:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:46.486 13:12:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.486 13:12:43 -- common/autotest_common.sh@10 -- # set +x 00:04:46.486 13:12:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:46.486 13:12:43 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:46.486 13:12:43 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.486 13:12:43 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:46.486 13:12:43 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:46.486 13:12:43 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:46.486 13:12:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:46.486 13:12:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:46.486 13:12:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.486 13:12:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:46.486 13:12:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:46.486 13:12:43 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:46.486 13:12:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:04:46.486 13:12:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.486 13:12:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:46.486 13:12:43 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:46.486 13:12:43 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:46.486 13:12:43 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:46.486 13:12:43 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:04:46.486 13:12:43 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:04:46.486 13:12:43 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3439952 00:04:46.486 13:12:43 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.486 13:12:43 -- common/autotest_common.sh@1598 -- # waitforlisten 3439952 00:04:46.486 13:12:43 -- common/autotest_common.sh@829 -- # '[' -z 3439952 ']' 00:04:46.486 13:12:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.486 13:12:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.486 13:12:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.486 13:12:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.486 13:12:43 -- common/autotest_common.sh@10 -- # set +x 00:04:46.486 [2024-07-12 13:12:43.886829] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
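The trace above walks get_nvme_bdfs_by_id before spdk_tgt is launched: gen_nvme.sh is piped through jq to list the NVMe PCI addresses, and each address's device id is read from sysfs and compared against 0x0a54. A minimal standalone sketch of that discovery step follows; rootdir and the /path/to/spdk value are illustrative placeholders, not the literal workspace path used by this job.

```bash
#!/usr/bin/env bash
# Rough re-creation of the get_nvme_bdfs / get_nvme_bdfs_by_id steps traced above.
# rootdir and target_device are illustrative; the job substitutes its own workspace path.
rootdir=${rootdir:-/path/to/spdk}
target_device=0x0a54

# gen_nvme.sh emits a bdev_nvme_attach_controller config; jq pulls out the PCI addresses.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

# Keep only controllers whose PCI device id matches the one the OPAL test targets.
for bdf in "${bdfs[@]}"; do
    if [[ "$(cat "/sys/bus/pci/devices/$bdf/device")" == "$target_device" ]]; then
        printf '%s\n' "$bdf"
    fi
done
```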
00:04:46.486 [2024-07-12 13:12:43.886935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439952 ] 00:04:46.486 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.486 [2024-07-12 13:12:43.918744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:46.486 [2024-07-12 13:12:43.944201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.745 [2024-07-12 13:12:44.025361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.003 13:12:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.003 13:12:44 -- common/autotest_common.sh@862 -- # return 0 00:04:47.003 13:12:44 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:47.003 13:12:44 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:47.003 13:12:44 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:50.281 nvme0n1 00:04:50.281 13:12:47 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:50.281 [2024-07-12 13:12:47.548694] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:50.281 [2024-07-12 13:12:47.548733] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:50.281 request: 00:04:50.281 { 00:04:50.281 "nvme_ctrlr_name": "nvme0", 00:04:50.281 "password": "test", 00:04:50.281 "method": "bdev_nvme_opal_revert", 00:04:50.281 "req_id": 1 00:04:50.281 } 00:04:50.281 Got JSON-RPC error response 00:04:50.281 response: 00:04:50.281 { 00:04:50.281 "code": -32603, 00:04:50.281 "message": "Internal error" 00:04:50.281 } 00:04:50.281 13:12:47 -- common/autotest_common.sh@1604 -- # true 00:04:50.281 13:12:47 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:50.281 13:12:47 -- common/autotest_common.sh@1608 -- # killprocess 3439952 00:04:50.281 13:12:47 -- common/autotest_common.sh@948 -- # '[' -z 3439952 ']' 00:04:50.281 13:12:47 -- common/autotest_common.sh@952 -- # kill -0 3439952 00:04:50.281 13:12:47 -- common/autotest_common.sh@953 -- # uname 00:04:50.281 13:12:47 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.281 13:12:47 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3439952 00:04:50.281 13:12:47 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.281 13:12:47 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.281 13:12:47 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3439952' 00:04:50.281 killing process with pid 3439952 00:04:50.281 13:12:47 -- common/autotest_common.sh@967 -- # kill 3439952 00:04:50.281 13:12:47 -- common/autotest_common.sh@972 -- # wait 3439952 00:04:52.209 13:12:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:52.209 13:12:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:52.209 13:12:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.209 13:12:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.209 13:12:49 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:52.209 13:12:49 -- common/autotest_common.sh@722 -- # xtrace_disable 
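The opal_revert_cleanup trace above attaches the controller over JSON-RPC and then calls bdev_nvme_opal_revert, which on this drive fails with the admin-SP session error and the -32603 response shown; the calling script tolerates that and continues. A hedged manual replay of the same two calls, assuming an spdk_tgt is already listening on the default /var/tmp/spdk.sock socket:

```bash
# Manual replay of the opal_revert_cleanup RPC calls traced above; rootdir is an
# illustrative placeholder for the SPDK checkout.
rpc="${rootdir:-/path/to/spdk}/scripts/rpc.py"

# Attach the controller found during discovery as bdev controller "nvme0".
sudo "$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0

# Revert the TPer with the test password; on this drive it fails with the
# admin-SP session error seen above, which the test script tolerates.
sudo "$rpc" bdev_nvme_opal_revert -b nvme0 -p test || echo "opal revert failed (tolerated by the test)"
```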
00:04:52.209 13:12:49 -- common/autotest_common.sh@10 -- # set +x 00:04:52.209 13:12:49 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:52.209 13:12:49 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:52.209 13:12:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.209 13:12:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.209 13:12:49 -- common/autotest_common.sh@10 -- # set +x 00:04:52.209 ************************************ 00:04:52.209 START TEST env 00:04:52.209 ************************************ 00:04:52.209 13:12:49 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:52.209 * Looking for test storage... 00:04:52.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:52.209 13:12:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:52.209 13:12:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.209 13:12:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.209 13:12:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.209 ************************************ 00:04:52.209 START TEST env_memory 00:04:52.209 ************************************ 00:04:52.209 13:12:49 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:52.209 00:04:52.209 00:04:52.209 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.209 http://cunit.sourceforge.net/ 00:04:52.209 00:04:52.209 00:04:52.209 Suite: memory 00:04:52.209 Test: alloc and free memory map ...[2024-07-12 13:12:49.431616] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:52.209 passed 00:04:52.209 Test: mem map translation ...[2024-07-12 13:12:49.452457] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:52.209 [2024-07-12 13:12:49.452480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:52.209 [2024-07-12 13:12:49.452540] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:52.209 [2024-07-12 13:12:49.452553] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:52.209 passed 00:04:52.209 Test: mem map registration ...[2024-07-12 13:12:49.494122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:52.209 [2024-07-12 13:12:49.494142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:52.209 passed 00:04:52.209 Test: mem map adjacent registrations ...passed 00:04:52.209 00:04:52.209 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.209 suites 1 1 n/a 0 0 00:04:52.209 tests 4 4 4 0 0 00:04:52.209 
asserts 152 152 152 0 n/a 00:04:52.209 00:04:52.209 Elapsed time = 0.140 seconds 00:04:52.209 00:04:52.209 real 0m0.147s 00:04:52.209 user 0m0.138s 00:04:52.209 sys 0m0.009s 00:04:52.209 13:12:49 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.209 13:12:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:52.209 ************************************ 00:04:52.209 END TEST env_memory 00:04:52.209 ************************************ 00:04:52.209 13:12:49 env -- common/autotest_common.sh@1142 -- # return 0 00:04:52.209 13:12:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:52.209 13:12:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.209 13:12:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.209 13:12:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.209 ************************************ 00:04:52.209 START TEST env_vtophys 00:04:52.209 ************************************ 00:04:52.209 13:12:49 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:52.209 EAL: lib.eal log level changed from notice to debug 00:04:52.209 EAL: Detected lcore 0 as core 0 on socket 0 00:04:52.209 EAL: Detected lcore 1 as core 1 on socket 0 00:04:52.209 EAL: Detected lcore 2 as core 2 on socket 0 00:04:52.209 EAL: Detected lcore 3 as core 3 on socket 0 00:04:52.209 EAL: Detected lcore 4 as core 4 on socket 0 00:04:52.209 EAL: Detected lcore 5 as core 5 on socket 0 00:04:52.209 EAL: Detected lcore 6 as core 8 on socket 0 00:04:52.209 EAL: Detected lcore 7 as core 9 on socket 0 00:04:52.209 EAL: Detected lcore 8 as core 10 on socket 0 00:04:52.209 EAL: Detected lcore 9 as core 11 on socket 0 00:04:52.209 EAL: Detected lcore 10 as core 12 on socket 0 00:04:52.209 EAL: Detected lcore 11 as core 13 on socket 0 00:04:52.209 EAL: Detected lcore 12 as core 0 on socket 1 00:04:52.209 EAL: Detected lcore 13 as core 1 on socket 1 00:04:52.209 EAL: Detected lcore 14 as core 2 on socket 1 00:04:52.209 EAL: Detected lcore 15 as core 3 on socket 1 00:04:52.209 EAL: Detected lcore 16 as core 4 on socket 1 00:04:52.209 EAL: Detected lcore 17 as core 5 on socket 1 00:04:52.209 EAL: Detected lcore 18 as core 8 on socket 1 00:04:52.209 EAL: Detected lcore 19 as core 9 on socket 1 00:04:52.209 EAL: Detected lcore 20 as core 10 on socket 1 00:04:52.209 EAL: Detected lcore 21 as core 11 on socket 1 00:04:52.209 EAL: Detected lcore 22 as core 12 on socket 1 00:04:52.209 EAL: Detected lcore 23 as core 13 on socket 1 00:04:52.209 EAL: Detected lcore 24 as core 0 on socket 0 00:04:52.209 EAL: Detected lcore 25 as core 1 on socket 0 00:04:52.209 EAL: Detected lcore 26 as core 2 on socket 0 00:04:52.209 EAL: Detected lcore 27 as core 3 on socket 0 00:04:52.209 EAL: Detected lcore 28 as core 4 on socket 0 00:04:52.210 EAL: Detected lcore 29 as core 5 on socket 0 00:04:52.210 EAL: Detected lcore 30 as core 8 on socket 0 00:04:52.210 EAL: Detected lcore 31 as core 9 on socket 0 00:04:52.210 EAL: Detected lcore 32 as core 10 on socket 0 00:04:52.210 EAL: Detected lcore 33 as core 11 on socket 0 00:04:52.210 EAL: Detected lcore 34 as core 12 on socket 0 00:04:52.210 EAL: Detected lcore 35 as core 13 on socket 0 00:04:52.210 EAL: Detected lcore 36 as core 0 on socket 1 00:04:52.210 EAL: Detected lcore 37 as core 1 on socket 1 00:04:52.210 EAL: Detected lcore 38 as core 2 on socket 1 
00:04:52.210 EAL: Detected lcore 39 as core 3 on socket 1 00:04:52.210 EAL: Detected lcore 40 as core 4 on socket 1 00:04:52.210 EAL: Detected lcore 41 as core 5 on socket 1 00:04:52.210 EAL: Detected lcore 42 as core 8 on socket 1 00:04:52.210 EAL: Detected lcore 43 as core 9 on socket 1 00:04:52.210 EAL: Detected lcore 44 as core 10 on socket 1 00:04:52.210 EAL: Detected lcore 45 as core 11 on socket 1 00:04:52.210 EAL: Detected lcore 46 as core 12 on socket 1 00:04:52.210 EAL: Detected lcore 47 as core 13 on socket 1 00:04:52.210 EAL: Maximum logical cores by configuration: 128 00:04:52.210 EAL: Detected CPU lcores: 48 00:04:52.210 EAL: Detected NUMA nodes: 2 00:04:52.210 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:52.210 EAL: Detected shared linkage of DPDK 00:04:52.210 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:52.210 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:52.210 EAL: Registered [vdev] bus. 00:04:52.210 EAL: bus.vdev log level changed from disabled to notice 00:04:52.210 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:52.210 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:52.210 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:52.210 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:52.210 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:52.210 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:52.210 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:52.210 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:52.210 EAL: No shared files mode enabled, IPC will be disabled 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Bus pci wants IOVA as 'DC' 00:04:52.210 EAL: Bus vdev wants IOVA as 'DC' 00:04:52.210 EAL: Buses did not request a specific IOVA mode. 00:04:52.210 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:52.210 EAL: Selected IOVA mode 'VA' 00:04:52.210 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.210 EAL: Probing VFIO support... 00:04:52.210 EAL: IOMMU type 1 (Type 1) is supported 00:04:52.210 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:52.210 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:52.210 EAL: VFIO support initialized 00:04:52.210 EAL: Ask a virtual area of 0x2e000 bytes 00:04:52.210 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:52.210 EAL: Setting up physically contiguous memory... 
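(The EAL memseg layout continues below.) The vtophys run above probes VFIO, selects IOVA-as-VA, and lays out 2 MB hugepage memseg lists. As a sketch of the usual node preparation before these env tests, assuming the HUGEMEM knob honoured by scripts/setup.sh and a purely illustrative size:

```bash
# Typical hugepage/VFIO preparation before the env tests; the 2048 MB figure is
# only illustrative, and rootdir is a placeholder for the SPDK checkout.
rootdir=${rootdir:-/path/to/spdk}

sudo HUGEMEM=2048 "$rootdir/scripts/setup.sh"   # reserve ~2 GB of 2 MB hugepages and bind devices to vfio-pci
grep -i huge /proc/meminfo                       # confirm HugePages_Total / HugePages_Free
"$rootdir/scripts/setup.sh" status               # prints a device table like the one at the top of this section
```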
00:04:52.210 EAL: Setting maximum number of open files to 524288 00:04:52.210 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:52.210 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:52.210 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:52.210 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.210 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:52.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.210 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.210 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:52.210 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:52.210 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.210 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:52.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.210 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.210 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:52.210 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:52.210 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.210 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:52.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.210 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.210 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:52.210 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:52.210 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.210 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:52.210 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.210 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.210 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:52.210 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:52.210 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:52.210 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.210 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:52.210 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.210 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.210 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:52.210 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:52.210 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.210 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:52.210 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.210 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.210 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:52.210 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:52.210 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.210 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:52.210 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.210 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.210 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:52.210 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:52.210 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.210 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:52.210 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.210 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.210 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:52.210 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:52.210 EAL: Hugepages will be freed exactly as allocated. 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: TSC frequency is ~2700000 KHz 00:04:52.210 EAL: Main lcore 0 is ready (tid=7fc4e913aa00;cpuset=[0]) 00:04:52.210 EAL: Trying to obtain current memory policy. 00:04:52.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.210 EAL: Restoring previous memory policy: 0 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was expanded by 2MB 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Mem event callback 'spdk:(nil)' registered 00:04:52.210 00:04:52.210 00:04:52.210 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.210 http://cunit.sourceforge.net/ 00:04:52.210 00:04:52.210 00:04:52.210 Suite: components_suite 00:04:52.210 Test: vtophys_malloc_test ...passed 00:04:52.210 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:52.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.210 EAL: Restoring previous memory policy: 4 00:04:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was expanded by 4MB 00:04:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was shrunk by 4MB 00:04:52.210 EAL: Trying to obtain current memory policy. 00:04:52.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.210 EAL: Restoring previous memory policy: 4 00:04:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was expanded by 6MB 00:04:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was shrunk by 6MB 00:04:52.210 EAL: Trying to obtain current memory policy. 00:04:52.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.210 EAL: Restoring previous memory policy: 4 00:04:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was expanded by 10MB 00:04:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was shrunk by 10MB 00:04:52.210 EAL: Trying to obtain current memory policy. 
00:04:52.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.210 EAL: Restoring previous memory policy: 4 00:04:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was expanded by 18MB 00:04:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.210 EAL: request: mp_malloc_sync 00:04:52.210 EAL: No shared files mode enabled, IPC is disabled 00:04:52.210 EAL: Heap on socket 0 was shrunk by 18MB 00:04:52.210 EAL: Trying to obtain current memory policy. 00:04:52.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.467 EAL: Restoring previous memory policy: 4 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.467 EAL: request: mp_malloc_sync 00:04:52.467 EAL: No shared files mode enabled, IPC is disabled 00:04:52.467 EAL: Heap on socket 0 was expanded by 34MB 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.467 EAL: request: mp_malloc_sync 00:04:52.467 EAL: No shared files mode enabled, IPC is disabled 00:04:52.467 EAL: Heap on socket 0 was shrunk by 34MB 00:04:52.467 EAL: Trying to obtain current memory policy. 00:04:52.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.467 EAL: Restoring previous memory policy: 4 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.467 EAL: request: mp_malloc_sync 00:04:52.467 EAL: No shared files mode enabled, IPC is disabled 00:04:52.467 EAL: Heap on socket 0 was expanded by 66MB 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.467 EAL: request: mp_malloc_sync 00:04:52.467 EAL: No shared files mode enabled, IPC is disabled 00:04:52.467 EAL: Heap on socket 0 was shrunk by 66MB 00:04:52.467 EAL: Trying to obtain current memory policy. 00:04:52.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.467 EAL: Restoring previous memory policy: 4 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.467 EAL: request: mp_malloc_sync 00:04:52.467 EAL: No shared files mode enabled, IPC is disabled 00:04:52.467 EAL: Heap on socket 0 was expanded by 130MB 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.467 EAL: request: mp_malloc_sync 00:04:52.467 EAL: No shared files mode enabled, IPC is disabled 00:04:52.467 EAL: Heap on socket 0 was shrunk by 130MB 00:04:52.467 EAL: Trying to obtain current memory policy. 00:04:52.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.467 EAL: Restoring previous memory policy: 4 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.467 EAL: request: mp_malloc_sync 00:04:52.467 EAL: No shared files mode enabled, IPC is disabled 00:04:52.467 EAL: Heap on socket 0 was expanded by 258MB 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.724 EAL: request: mp_malloc_sync 00:04:52.724 EAL: No shared files mode enabled, IPC is disabled 00:04:52.724 EAL: Heap on socket 0 was shrunk by 258MB 00:04:52.724 EAL: Trying to obtain current memory policy. 
00:04:52.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.724 EAL: Restoring previous memory policy: 4 00:04:52.724 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.724 EAL: request: mp_malloc_sync 00:04:52.724 EAL: No shared files mode enabled, IPC is disabled 00:04:52.724 EAL: Heap on socket 0 was expanded by 514MB 00:04:52.981 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.981 EAL: request: mp_malloc_sync 00:04:52.981 EAL: No shared files mode enabled, IPC is disabled 00:04:52.981 EAL: Heap on socket 0 was shrunk by 514MB 00:04:52.981 EAL: Trying to obtain current memory policy. 00:04:52.981 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.239 EAL: Restoring previous memory policy: 4 00:04:53.239 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.239 EAL: request: mp_malloc_sync 00:04:53.239 EAL: No shared files mode enabled, IPC is disabled 00:04:53.239 EAL: Heap on socket 0 was expanded by 1026MB 00:04:53.495 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.751 EAL: request: mp_malloc_sync 00:04:53.751 EAL: No shared files mode enabled, IPC is disabled 00:04:53.751 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:53.751 passed 00:04:53.751 00:04:53.751 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.751 suites 1 1 n/a 0 0 00:04:53.751 tests 2 2 2 0 0 00:04:53.751 asserts 497 497 497 0 n/a 00:04:53.751 00:04:53.751 Elapsed time = 1.333 seconds 00:04:53.751 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.751 EAL: request: mp_malloc_sync 00:04:53.751 EAL: No shared files mode enabled, IPC is disabled 00:04:53.751 EAL: Heap on socket 0 was shrunk by 2MB 00:04:53.751 EAL: No shared files mode enabled, IPC is disabled 00:04:53.751 EAL: No shared files mode enabled, IPC is disabled 00:04:53.751 EAL: No shared files mode enabled, IPC is disabled 00:04:53.751 00:04:53.751 real 0m1.442s 00:04:53.751 user 0m0.853s 00:04:53.751 sys 0m0.558s 00:04:53.751 13:12:51 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.751 13:12:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:53.751 ************************************ 00:04:53.751 END TEST env_vtophys 00:04:53.751 ************************************ 00:04:53.751 13:12:51 env -- common/autotest_common.sh@1142 -- # return 0 00:04:53.751 13:12:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:53.751 13:12:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.751 13:12:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.751 13:12:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.751 ************************************ 00:04:53.751 START TEST env_pci 00:04:53.751 ************************************ 00:04:53.751 13:12:51 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:53.751 00:04:53.751 00:04:53.751 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.751 http://cunit.sourceforge.net/ 00:04:53.751 00:04:53.751 00:04:53.751 Suite: pci 00:04:53.751 Test: pci_hook ...[2024-07-12 13:12:51.097289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3440843 has claimed it 00:04:53.751 EAL: Cannot find device (10000:00:01.0) 00:04:53.751 EAL: Failed to attach device on primary process 00:04:53.751 passed 00:04:53.751 
00:04:53.751 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.751 suites 1 1 n/a 0 0 00:04:53.751 tests 1 1 1 0 0 00:04:53.751 asserts 25 25 25 0 n/a 00:04:53.751 00:04:53.751 Elapsed time = 0.020 seconds 00:04:53.751 00:04:53.751 real 0m0.033s 00:04:53.751 user 0m0.013s 00:04:53.751 sys 0m0.020s 00:04:53.752 13:12:51 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.752 13:12:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:53.752 ************************************ 00:04:53.752 END TEST env_pci 00:04:53.752 ************************************ 00:04:53.752 13:12:51 env -- common/autotest_common.sh@1142 -- # return 0 00:04:53.752 13:12:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:53.752 13:12:51 env -- env/env.sh@15 -- # uname 00:04:53.752 13:12:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:53.752 13:12:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:53.752 13:12:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:53.752 13:12:51 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:53.752 13:12:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.752 13:12:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.752 ************************************ 00:04:53.752 START TEST env_dpdk_post_init 00:04:53.752 ************************************ 00:04:53.752 13:12:51 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:53.752 EAL: Detected CPU lcores: 48 00:04:53.752 EAL: Detected NUMA nodes: 2 00:04:53.752 EAL: Detected shared linkage of DPDK 00:04:53.752 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:53.752 EAL: Selected IOVA mode 'VA' 00:04:53.752 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.752 EAL: VFIO support initialized 00:04:54.010 EAL: Using IOMMU type 1 (Type 1) 00:04:58.195 Starting DPDK initialization... 00:04:58.195 Starting SPDK post initialization... 00:04:58.195 SPDK NVMe probe 00:04:58.195 Attaching to 0000:0b:00.0 00:04:58.195 Attached to 0000:0b:00.0 00:04:58.195 Cleaning up... 
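env_dpdk_post_init above probes and attaches to 0000:0b:00.0, then cleans up (its timing follows below). To repeat that probe by hand with the same core mask and base virtual address as the traced invocation, something like the following should work, assuming the devices were bound by setup.sh first and with rootdir again standing in for the SPDK checkout:

```bash
# Re-running the post-init probe by hand, mirroring the invocation traced above.
rootdir=${rootdir:-/path/to/spdk}

sudo "$rootdir/scripts/setup.sh"                                   # bind the NVMe/I/OAT devices to vfio-pci
sudo "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" \
     -c 0x1 --base-virtaddr=0x200000000000                          # single core, fixed base vaddr as in the job
sudo "$rootdir/scripts/setup.sh" reset                             # return the devices to their kernel drivers
```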
00:04:58.195 00:04:58.195 real 0m4.320s 00:04:58.195 user 0m3.185s 00:04:58.195 sys 0m0.189s 00:04:58.196 13:12:55 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.196 13:12:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.196 ************************************ 00:04:58.196 END TEST env_dpdk_post_init 00:04:58.196 ************************************ 00:04:58.196 13:12:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.196 13:12:55 env -- env/env.sh@26 -- # uname 00:04:58.196 13:12:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:58.196 13:12:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.196 13:12:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.196 13:12:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.196 13:12:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.196 ************************************ 00:04:58.196 START TEST env_mem_callbacks 00:04:58.196 ************************************ 00:04:58.196 13:12:55 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.196 EAL: Detected CPU lcores: 48 00:04:58.196 EAL: Detected NUMA nodes: 2 00:04:58.196 EAL: Detected shared linkage of DPDK 00:04:58.196 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.196 EAL: Selected IOVA mode 'VA' 00:04:58.196 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.196 EAL: VFIO support initialized 00:04:58.196 00:04:58.196 00:04:58.196 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.196 http://cunit.sourceforge.net/ 00:04:58.196 00:04:58.196 00:04:58.196 Suite: memory 00:04:58.196 Test: test ... 
00:04:58.196 register 0x200000200000 2097152 00:04:58.196 malloc 3145728 00:04:58.196 register 0x200000400000 4194304 00:04:58.196 buf 0x200000500000 len 3145728 PASSED 00:04:58.196 malloc 64 00:04:58.196 buf 0x2000004fff40 len 64 PASSED 00:04:58.196 malloc 4194304 00:04:58.196 register 0x200000800000 6291456 00:04:58.196 buf 0x200000a00000 len 4194304 PASSED 00:04:58.196 free 0x200000500000 3145728 00:04:58.196 free 0x2000004fff40 64 00:04:58.196 unregister 0x200000400000 4194304 PASSED 00:04:58.196 free 0x200000a00000 4194304 00:04:58.196 unregister 0x200000800000 6291456 PASSED 00:04:58.196 malloc 8388608 00:04:58.196 register 0x200000400000 10485760 00:04:58.196 buf 0x200000600000 len 8388608 PASSED 00:04:58.196 free 0x200000600000 8388608 00:04:58.196 unregister 0x200000400000 10485760 PASSED 00:04:58.196 passed 00:04:58.196 00:04:58.196 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.196 suites 1 1 n/a 0 0 00:04:58.196 tests 1 1 1 0 0 00:04:58.196 asserts 15 15 15 0 n/a 00:04:58.196 00:04:58.196 Elapsed time = 0.005 seconds 00:04:58.196 00:04:58.196 real 0m0.050s 00:04:58.196 user 0m0.021s 00:04:58.196 sys 0m0.028s 00:04:58.196 13:12:55 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.196 13:12:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:58.196 ************************************ 00:04:58.196 END TEST env_mem_callbacks 00:04:58.196 ************************************ 00:04:58.196 13:12:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.196 00:04:58.196 real 0m6.293s 00:04:58.196 user 0m4.310s 00:04:58.196 sys 0m1.022s 00:04:58.196 13:12:55 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.196 13:12:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.196 ************************************ 00:04:58.196 END TEST env 00:04:58.196 ************************************ 00:04:58.196 13:12:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.196 13:12:55 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:58.196 13:12:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.196 13:12:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.196 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:04:58.196 ************************************ 00:04:58.196 START TEST rpc 00:04:58.196 ************************************ 00:04:58.196 13:12:55 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:58.454 * Looking for test storage... 00:04:58.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.454 13:12:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3441499 00:04:58.454 13:12:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:58.454 13:12:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.454 13:12:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3441499 00:04:58.454 13:12:55 rpc -- common/autotest_common.sh@829 -- # '[' -z 3441499 ']' 00:04:58.454 13:12:55 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.454 13:12:55 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.454 13:12:55 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
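The rpc.sh run below starts an spdk_tgt with the bdev tracepoint group enabled and then drives it over the default Unix socket. A minimal manual version of the same flow, issuing the bdev_* calls the rpc_integrity trace exercises; the socket wait loop is a crude stand-in for the waitforlisten helper, and rootdir remains an illustrative placeholder:

```bash
# Manual sketch of the rpc_integrity flow shown below.
rootdir=${rootdir:-/path/to/spdk}
rpc="$rootdir/scripts/rpc.py"

sudo "$rootdir/build/bin/spdk_tgt" -e bdev &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done       # crude stand-in for waitforlisten

sudo "$rpc" bdev_malloc_create 8 512                      # 8 MB malloc bdev with 512-byte blocks -> "Malloc0"
sudo "$rpc" bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on top of it
sudo "$rpc" bdev_get_bdevs | jq length                    # expect 2, as the integrity test asserts
sudo "$rpc" bdev_passthru_delete Passthru0
sudo "$rpc" bdev_malloc_delete Malloc0
sudo "$rpc" spdk_kill_instance SIGTERM                    # shut the target back down
```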
00:04:58.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.454 13:12:55 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.455 13:12:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.455 [2024-07-12 13:12:55.770536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:04:58.455 [2024-07-12 13:12:55.770631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441499 ] 00:04:58.455 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.455 [2024-07-12 13:12:55.802190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:58.455 [2024-07-12 13:12:55.828766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.455 [2024-07-12 13:12:55.912905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:58.455 [2024-07-12 13:12:55.912956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3441499' to capture a snapshot of events at runtime. 00:04:58.455 [2024-07-12 13:12:55.912980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:58.455 [2024-07-12 13:12:55.912990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:58.455 [2024-07-12 13:12:55.913000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3441499 for offline analysis/debug. 00:04:58.455 [2024-07-12 13:12:55.913026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.713 13:12:56 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.713 13:12:56 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:58.713 13:12:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.713 13:12:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.713 13:12:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.713 13:12:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.713 13:12:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.713 13:12:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.713 13:12:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.713 ************************************ 00:04:58.713 START TEST rpc_integrity 00:04:58.713 ************************************ 00:04:58.713 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:58.713 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.713 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.713 13:12:56 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:58.713 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.713 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.713 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.971 { 00:04:58.971 "name": "Malloc0", 00:04:58.971 "aliases": [ 00:04:58.971 "5e81fbb9-3b05-4324-958b-68c2df0ddcb5" 00:04:58.971 ], 00:04:58.971 "product_name": "Malloc disk", 00:04:58.971 "block_size": 512, 00:04:58.971 "num_blocks": 16384, 00:04:58.971 "uuid": "5e81fbb9-3b05-4324-958b-68c2df0ddcb5", 00:04:58.971 "assigned_rate_limits": { 00:04:58.971 "rw_ios_per_sec": 0, 00:04:58.971 "rw_mbytes_per_sec": 0, 00:04:58.971 "r_mbytes_per_sec": 0, 00:04:58.971 "w_mbytes_per_sec": 0 00:04:58.971 }, 00:04:58.971 "claimed": false, 00:04:58.971 "zoned": false, 00:04:58.971 "supported_io_types": { 00:04:58.971 "read": true, 00:04:58.971 "write": true, 00:04:58.971 "unmap": true, 00:04:58.971 "flush": true, 00:04:58.971 "reset": true, 00:04:58.971 "nvme_admin": false, 00:04:58.971 "nvme_io": false, 00:04:58.971 "nvme_io_md": false, 00:04:58.971 "write_zeroes": true, 00:04:58.971 "zcopy": true, 00:04:58.971 "get_zone_info": false, 00:04:58.971 "zone_management": false, 00:04:58.971 "zone_append": false, 00:04:58.971 "compare": false, 00:04:58.971 "compare_and_write": false, 00:04:58.971 "abort": true, 00:04:58.971 "seek_hole": false, 00:04:58.971 "seek_data": false, 00:04:58.971 "copy": true, 00:04:58.971 "nvme_iov_md": false 00:04:58.971 }, 00:04:58.971 "memory_domains": [ 00:04:58.971 { 00:04:58.971 "dma_device_id": "system", 00:04:58.971 "dma_device_type": 1 00:04:58.971 }, 00:04:58.971 { 00:04:58.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.971 "dma_device_type": 2 00:04:58.971 } 00:04:58.971 ], 00:04:58.971 "driver_specific": {} 00:04:58.971 } 00:04:58.971 ]' 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.971 [2024-07-12 13:12:56.275128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.971 [2024-07-12 13:12:56.275167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.971 [2024-07-12 13:12:56.275188] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fc07f0 00:04:58.971 [2024-07-12 13:12:56.275200] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.971 [2024-07-12 13:12:56.276541] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.971 [2024-07-12 13:12:56.276565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.971 Passthru0 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.971 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.971 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.971 { 00:04:58.971 "name": "Malloc0", 00:04:58.971 "aliases": [ 00:04:58.971 "5e81fbb9-3b05-4324-958b-68c2df0ddcb5" 00:04:58.971 ], 00:04:58.971 "product_name": "Malloc disk", 00:04:58.971 "block_size": 512, 00:04:58.971 "num_blocks": 16384, 00:04:58.971 "uuid": "5e81fbb9-3b05-4324-958b-68c2df0ddcb5", 00:04:58.971 "assigned_rate_limits": { 00:04:58.971 "rw_ios_per_sec": 0, 00:04:58.971 "rw_mbytes_per_sec": 0, 00:04:58.971 "r_mbytes_per_sec": 0, 00:04:58.971 "w_mbytes_per_sec": 0 00:04:58.971 }, 00:04:58.971 "claimed": true, 00:04:58.971 "claim_type": "exclusive_write", 00:04:58.971 "zoned": false, 00:04:58.971 "supported_io_types": { 00:04:58.971 "read": true, 00:04:58.971 "write": true, 00:04:58.971 "unmap": true, 00:04:58.971 "flush": true, 00:04:58.971 "reset": true, 00:04:58.971 "nvme_admin": false, 00:04:58.971 "nvme_io": false, 00:04:58.971 "nvme_io_md": false, 00:04:58.971 "write_zeroes": true, 00:04:58.971 "zcopy": true, 00:04:58.971 "get_zone_info": false, 00:04:58.971 "zone_management": false, 00:04:58.971 "zone_append": false, 00:04:58.971 "compare": false, 00:04:58.971 "compare_and_write": false, 00:04:58.971 "abort": true, 00:04:58.971 "seek_hole": false, 00:04:58.971 "seek_data": false, 00:04:58.971 "copy": true, 00:04:58.971 "nvme_iov_md": false 00:04:58.971 }, 00:04:58.971 "memory_domains": [ 00:04:58.971 { 00:04:58.971 "dma_device_id": "system", 00:04:58.971 "dma_device_type": 1 00:04:58.971 }, 00:04:58.971 { 00:04:58.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.971 "dma_device_type": 2 00:04:58.971 } 00:04:58.971 ], 00:04:58.971 "driver_specific": {} 00:04:58.971 }, 00:04:58.971 { 00:04:58.971 "name": "Passthru0", 00:04:58.971 "aliases": [ 00:04:58.971 "f61a1b40-d179-5800-8ddb-a07f4b3e6b6b" 00:04:58.971 ], 00:04:58.971 "product_name": "passthru", 00:04:58.971 "block_size": 512, 00:04:58.971 "num_blocks": 16384, 00:04:58.971 "uuid": "f61a1b40-d179-5800-8ddb-a07f4b3e6b6b", 00:04:58.971 "assigned_rate_limits": { 00:04:58.971 "rw_ios_per_sec": 0, 00:04:58.971 "rw_mbytes_per_sec": 0, 00:04:58.971 "r_mbytes_per_sec": 0, 00:04:58.971 "w_mbytes_per_sec": 0 00:04:58.972 }, 00:04:58.972 "claimed": false, 00:04:58.972 "zoned": false, 00:04:58.972 "supported_io_types": { 00:04:58.972 "read": true, 00:04:58.972 "write": true, 00:04:58.972 "unmap": true, 00:04:58.972 "flush": true, 00:04:58.972 "reset": true, 00:04:58.972 "nvme_admin": false, 00:04:58.972 "nvme_io": false, 00:04:58.972 "nvme_io_md": false, 00:04:58.972 "write_zeroes": true, 00:04:58.972 "zcopy": true, 00:04:58.972 "get_zone_info": false, 
00:04:58.972 "zone_management": false, 00:04:58.972 "zone_append": false, 00:04:58.972 "compare": false, 00:04:58.972 "compare_and_write": false, 00:04:58.972 "abort": true, 00:04:58.972 "seek_hole": false, 00:04:58.972 "seek_data": false, 00:04:58.972 "copy": true, 00:04:58.972 "nvme_iov_md": false 00:04:58.972 }, 00:04:58.972 "memory_domains": [ 00:04:58.972 { 00:04:58.972 "dma_device_id": "system", 00:04:58.972 "dma_device_type": 1 00:04:58.972 }, 00:04:58.972 { 00:04:58.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.972 "dma_device_type": 2 00:04:58.972 } 00:04:58.972 ], 00:04:58.972 "driver_specific": { 00:04:58.972 "passthru": { 00:04:58.972 "name": "Passthru0", 00:04:58.972 "base_bdev_name": "Malloc0" 00:04:58.972 } 00:04:58.972 } 00:04:58.972 } 00:04:58.972 ]' 00:04:58.972 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.972 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.972 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.972 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.972 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.972 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.972 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.972 13:12:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.972 00:04:58.972 real 0m0.223s 00:04:58.972 user 0m0.143s 00:04:58.972 sys 0m0.023s 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.972 13:12:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.972 ************************************ 00:04:58.972 END TEST rpc_integrity 00:04:58.972 ************************************ 00:04:58.972 13:12:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.972 13:12:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.972 13:12:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.972 13:12:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.972 13:12:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.972 ************************************ 00:04:58.972 START TEST rpc_plugins 00:04:58.972 ************************************ 00:04:58.972 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:58.972 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.972 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.972 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.230 
13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.230 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:59.230 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:59.230 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.230 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.230 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.230 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:59.230 { 00:04:59.230 "name": "Malloc1", 00:04:59.230 "aliases": [ 00:04:59.230 "60e73282-4d71-44fa-917b-959e60aba527" 00:04:59.230 ], 00:04:59.230 "product_name": "Malloc disk", 00:04:59.230 "block_size": 4096, 00:04:59.230 "num_blocks": 256, 00:04:59.230 "uuid": "60e73282-4d71-44fa-917b-959e60aba527", 00:04:59.230 "assigned_rate_limits": { 00:04:59.230 "rw_ios_per_sec": 0, 00:04:59.230 "rw_mbytes_per_sec": 0, 00:04:59.230 "r_mbytes_per_sec": 0, 00:04:59.230 "w_mbytes_per_sec": 0 00:04:59.230 }, 00:04:59.230 "claimed": false, 00:04:59.230 "zoned": false, 00:04:59.230 "supported_io_types": { 00:04:59.230 "read": true, 00:04:59.230 "write": true, 00:04:59.230 "unmap": true, 00:04:59.230 "flush": true, 00:04:59.230 "reset": true, 00:04:59.230 "nvme_admin": false, 00:04:59.230 "nvme_io": false, 00:04:59.230 "nvme_io_md": false, 00:04:59.230 "write_zeroes": true, 00:04:59.230 "zcopy": true, 00:04:59.230 "get_zone_info": false, 00:04:59.230 "zone_management": false, 00:04:59.230 "zone_append": false, 00:04:59.230 "compare": false, 00:04:59.230 "compare_and_write": false, 00:04:59.230 "abort": true, 00:04:59.230 "seek_hole": false, 00:04:59.230 "seek_data": false, 00:04:59.230 "copy": true, 00:04:59.230 "nvme_iov_md": false 00:04:59.230 }, 00:04:59.230 "memory_domains": [ 00:04:59.230 { 00:04:59.230 "dma_device_id": "system", 00:04:59.230 "dma_device_type": 1 00:04:59.230 }, 00:04:59.230 { 00:04:59.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.230 "dma_device_type": 2 00:04:59.230 } 00:04:59.230 ], 00:04:59.230 "driver_specific": {} 00:04:59.230 } 00:04:59.230 ]' 00:04:59.230 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:59.230 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:59.230 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:59.230 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.231 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.231 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.231 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:59.231 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.231 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.231 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.231 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:59.231 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:59.231 13:12:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:59.231 00:04:59.231 real 0m0.105s 00:04:59.231 user 0m0.064s 00:04:59.231 sys 0m0.012s 00:04:59.231 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.231 13:12:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.231 
************************************ 00:04:59.231 END TEST rpc_plugins 00:04:59.231 ************************************ 00:04:59.231 13:12:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.231 13:12:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:59.231 13:12:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.231 13:12:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.231 13:12:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.231 ************************************ 00:04:59.231 START TEST rpc_trace_cmd_test 00:04:59.231 ************************************ 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:59.231 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3441499", 00:04:59.231 "tpoint_group_mask": "0x8", 00:04:59.231 "iscsi_conn": { 00:04:59.231 "mask": "0x2", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "scsi": { 00:04:59.231 "mask": "0x4", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "bdev": { 00:04:59.231 "mask": "0x8", 00:04:59.231 "tpoint_mask": "0xffffffffffffffff" 00:04:59.231 }, 00:04:59.231 "nvmf_rdma": { 00:04:59.231 "mask": "0x10", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "nvmf_tcp": { 00:04:59.231 "mask": "0x20", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "ftl": { 00:04:59.231 "mask": "0x40", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "blobfs": { 00:04:59.231 "mask": "0x80", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "dsa": { 00:04:59.231 "mask": "0x200", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "thread": { 00:04:59.231 "mask": "0x400", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "nvme_pcie": { 00:04:59.231 "mask": "0x800", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "iaa": { 00:04:59.231 "mask": "0x1000", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "nvme_tcp": { 00:04:59.231 "mask": "0x2000", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "bdev_nvme": { 00:04:59.231 "mask": "0x4000", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 }, 00:04:59.231 "sock": { 00:04:59.231 "mask": "0x8000", 00:04:59.231 "tpoint_mask": "0x0" 00:04:59.231 } 00:04:59.231 }' 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:59.231 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:59.490 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:59.490 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.490 13:12:56 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.490 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.490 13:12:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.490 00:04:59.490 real 0m0.178s 00:04:59.490 user 0m0.157s 00:04:59.490 sys 0m0.014s 00:04:59.490 13:12:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.490 13:12:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.490 ************************************ 00:04:59.490 END TEST rpc_trace_cmd_test 00:04:59.490 ************************************ 00:04:59.490 13:12:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.490 13:12:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.490 13:12:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.490 13:12:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.490 13:12:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.490 13:12:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.490 13:12:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.490 ************************************ 00:04:59.490 START TEST rpc_daemon_integrity 00:04:59.490 ************************************ 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.490 { 00:04:59.490 "name": "Malloc2", 00:04:59.490 "aliases": [ 00:04:59.490 "571d294d-273c-4e0d-bba5-c20cc8260d4d" 00:04:59.490 ], 00:04:59.490 "product_name": "Malloc disk", 00:04:59.490 "block_size": 512, 00:04:59.490 "num_blocks": 16384, 00:04:59.490 "uuid": "571d294d-273c-4e0d-bba5-c20cc8260d4d", 00:04:59.490 "assigned_rate_limits": { 00:04:59.490 "rw_ios_per_sec": 0, 00:04:59.490 "rw_mbytes_per_sec": 0, 00:04:59.490 "r_mbytes_per_sec": 0, 00:04:59.490 "w_mbytes_per_sec": 0 00:04:59.490 }, 00:04:59.490 "claimed": false, 
00:04:59.490 "zoned": false, 00:04:59.490 "supported_io_types": { 00:04:59.490 "read": true, 00:04:59.490 "write": true, 00:04:59.490 "unmap": true, 00:04:59.490 "flush": true, 00:04:59.490 "reset": true, 00:04:59.490 "nvme_admin": false, 00:04:59.490 "nvme_io": false, 00:04:59.490 "nvme_io_md": false, 00:04:59.490 "write_zeroes": true, 00:04:59.490 "zcopy": true, 00:04:59.490 "get_zone_info": false, 00:04:59.490 "zone_management": false, 00:04:59.490 "zone_append": false, 00:04:59.490 "compare": false, 00:04:59.490 "compare_and_write": false, 00:04:59.490 "abort": true, 00:04:59.490 "seek_hole": false, 00:04:59.490 "seek_data": false, 00:04:59.490 "copy": true, 00:04:59.490 "nvme_iov_md": false 00:04:59.490 }, 00:04:59.490 "memory_domains": [ 00:04:59.490 { 00:04:59.490 "dma_device_id": "system", 00:04:59.490 "dma_device_type": 1 00:04:59.490 }, 00:04:59.490 { 00:04:59.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.490 "dma_device_type": 2 00:04:59.490 } 00:04:59.490 ], 00:04:59.490 "driver_specific": {} 00:04:59.490 } 00:04:59.490 ]' 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.490 [2024-07-12 13:12:56.916939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.490 [2024-07-12 13:12:56.916978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.490 [2024-07-12 13:12:56.917000] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2164490 00:04:59.490 [2024-07-12 13:12:56.917011] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.490 [2024-07-12 13:12:56.918124] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.490 [2024-07-12 13:12:56.918147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.490 Passthru0 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.490 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.491 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.491 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.491 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.491 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.491 { 00:04:59.491 "name": "Malloc2", 00:04:59.491 "aliases": [ 00:04:59.491 "571d294d-273c-4e0d-bba5-c20cc8260d4d" 00:04:59.491 ], 00:04:59.491 "product_name": "Malloc disk", 00:04:59.491 "block_size": 512, 00:04:59.491 "num_blocks": 16384, 00:04:59.491 "uuid": "571d294d-273c-4e0d-bba5-c20cc8260d4d", 00:04:59.491 "assigned_rate_limits": { 00:04:59.491 "rw_ios_per_sec": 0, 00:04:59.491 "rw_mbytes_per_sec": 0, 00:04:59.491 "r_mbytes_per_sec": 0, 00:04:59.491 "w_mbytes_per_sec": 0 00:04:59.491 }, 00:04:59.491 "claimed": true, 00:04:59.491 "claim_type": "exclusive_write", 00:04:59.491 "zoned": false, 00:04:59.491 "supported_io_types": { 00:04:59.491 "read": true, 00:04:59.491 "write": true, 
00:04:59.491 "unmap": true, 00:04:59.491 "flush": true, 00:04:59.491 "reset": true, 00:04:59.491 "nvme_admin": false, 00:04:59.491 "nvme_io": false, 00:04:59.491 "nvme_io_md": false, 00:04:59.491 "write_zeroes": true, 00:04:59.491 "zcopy": true, 00:04:59.491 "get_zone_info": false, 00:04:59.491 "zone_management": false, 00:04:59.491 "zone_append": false, 00:04:59.491 "compare": false, 00:04:59.491 "compare_and_write": false, 00:04:59.491 "abort": true, 00:04:59.491 "seek_hole": false, 00:04:59.491 "seek_data": false, 00:04:59.491 "copy": true, 00:04:59.491 "nvme_iov_md": false 00:04:59.491 }, 00:04:59.491 "memory_domains": [ 00:04:59.491 { 00:04:59.491 "dma_device_id": "system", 00:04:59.491 "dma_device_type": 1 00:04:59.491 }, 00:04:59.491 { 00:04:59.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.491 "dma_device_type": 2 00:04:59.491 } 00:04:59.491 ], 00:04:59.491 "driver_specific": {} 00:04:59.491 }, 00:04:59.491 { 00:04:59.491 "name": "Passthru0", 00:04:59.491 "aliases": [ 00:04:59.491 "ea02a576-3cde-5111-acdd-5c765a4e1636" 00:04:59.491 ], 00:04:59.491 "product_name": "passthru", 00:04:59.491 "block_size": 512, 00:04:59.491 "num_blocks": 16384, 00:04:59.491 "uuid": "ea02a576-3cde-5111-acdd-5c765a4e1636", 00:04:59.491 "assigned_rate_limits": { 00:04:59.491 "rw_ios_per_sec": 0, 00:04:59.491 "rw_mbytes_per_sec": 0, 00:04:59.491 "r_mbytes_per_sec": 0, 00:04:59.491 "w_mbytes_per_sec": 0 00:04:59.491 }, 00:04:59.491 "claimed": false, 00:04:59.491 "zoned": false, 00:04:59.491 "supported_io_types": { 00:04:59.491 "read": true, 00:04:59.491 "write": true, 00:04:59.491 "unmap": true, 00:04:59.491 "flush": true, 00:04:59.491 "reset": true, 00:04:59.491 "nvme_admin": false, 00:04:59.491 "nvme_io": false, 00:04:59.491 "nvme_io_md": false, 00:04:59.491 "write_zeroes": true, 00:04:59.491 "zcopy": true, 00:04:59.491 "get_zone_info": false, 00:04:59.491 "zone_management": false, 00:04:59.491 "zone_append": false, 00:04:59.491 "compare": false, 00:04:59.491 "compare_and_write": false, 00:04:59.491 "abort": true, 00:04:59.491 "seek_hole": false, 00:04:59.491 "seek_data": false, 00:04:59.491 "copy": true, 00:04:59.491 "nvme_iov_md": false 00:04:59.491 }, 00:04:59.491 "memory_domains": [ 00:04:59.491 { 00:04:59.491 "dma_device_id": "system", 00:04:59.491 "dma_device_type": 1 00:04:59.491 }, 00:04:59.491 { 00:04:59.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.491 "dma_device_type": 2 00:04:59.491 } 00:04:59.491 ], 00:04:59.491 "driver_specific": { 00:04:59.491 "passthru": { 00:04:59.491 "name": "Passthru0", 00:04:59.491 "base_bdev_name": "Malloc2" 00:04:59.491 } 00:04:59.491 } 00:04:59.491 } 00:04:59.491 ]' 00:04:59.491 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 13:12:56 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.749 13:12:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.749 13:12:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.749 00:04:59.749 real 0m0.211s 00:04:59.749 user 0m0.129s 00:04:59.749 sys 0m0.026s 00:04:59.749 13:12:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.749 13:12:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 ************************************ 00:04:59.749 END TEST rpc_daemon_integrity 00:04:59.749 ************************************ 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.749 13:12:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.749 13:12:57 rpc -- rpc/rpc.sh@84 -- # killprocess 3441499 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@948 -- # '[' -z 3441499 ']' 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@952 -- # kill -0 3441499 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@953 -- # uname 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3441499 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3441499' 00:04:59.749 killing process with pid 3441499 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@967 -- # kill 3441499 00:04:59.749 13:12:57 rpc -- common/autotest_common.sh@972 -- # wait 3441499 00:05:00.008 00:05:00.008 real 0m1.794s 00:05:00.008 user 0m2.221s 00:05:00.008 sys 0m0.592s 00:05:00.008 13:12:57 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.008 13:12:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.008 ************************************ 00:05:00.008 END TEST rpc 00:05:00.008 ************************************ 00:05:00.267 13:12:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.267 13:12:57 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:00.267 13:12:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.267 13:12:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.267 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.267 ************************************ 00:05:00.267 START TEST skip_rpc 00:05:00.267 ************************************ 00:05:00.267 13:12:57 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:00.267 * Looking for test storage... 
00:05:00.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.267 13:12:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.267 13:12:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:00.267 13:12:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.267 13:12:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.267 13:12:57 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.267 13:12:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.267 ************************************ 00:05:00.267 START TEST skip_rpc 00:05:00.267 ************************************ 00:05:00.267 13:12:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:00.267 13:12:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3441932 00:05:00.267 13:12:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.267 13:12:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.267 13:12:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:00.267 [2024-07-12 13:12:57.634222] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:00.267 [2024-07-12 13:12:57.634285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441932 ] 00:05:00.267 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.267 [2024-07-12 13:12:57.664912] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
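The target above was launched with --no-rpc-server, and the lines that follow assert that an ordinary RPC call cannot succeed against it. A rough by-hand equivalent, assuming an SPDK build tree as the working directory (the backgrounding and kill here are illustrative scaffolding, not the harness's own killprocess helper):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                  # the test also sleeps before probing
  ./scripts/rpc.py spdk_get_version \
    && echo "unexpected: got an RPC reply" \
    || echo "expected: no RPC server is listening on /var/tmp/spdk.sock"
  kill %1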
00:05:00.267 [2024-07-12 13:12:57.690084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.525 [2024-07-12 13:12:57.775135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3441932 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3441932 ']' 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3441932 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3441932 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3441932' 00:05:05.787 killing process with pid 3441932 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3441932 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3441932 00:05:05.787 00:05:05.787 real 0m5.417s 00:05:05.787 user 0m5.116s 00:05:05.787 sys 0m0.303s 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.787 13:13:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.787 ************************************ 00:05:05.787 END TEST skip_rpc 00:05:05.787 ************************************ 00:05:05.787 13:13:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:05.787 13:13:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.787 13:13:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.787 
13:13:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.787 13:13:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.787 ************************************ 00:05:05.787 START TEST skip_rpc_with_json 00:05:05.787 ************************************ 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3442622 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3442622 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3442622 ']' 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.787 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.787 [2024-07-12 13:13:03.100508] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:05.787 [2024-07-12 13:13:03.100584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442622 ] 00:05:05.787 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.787 [2024-07-12 13:13:03.132061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
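The skip_rpc_with_json run below builds its config.json by creating a TCP transport and saving the running configuration, then boots a second target from that file. Stripped of the harness wrappers, the round-trip looks roughly like this (socket and paths as in the log; a sketch, not the test script itself):

  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # errors with "No such device" until a transport exists
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json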
00:05:05.787 [2024-07-12 13:13:03.157499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.787 [2024-07-12 13:13:03.246097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.044 [2024-07-12 13:13:03.487060] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:06.044 request: 00:05:06.044 { 00:05:06.044 "trtype": "tcp", 00:05:06.044 "method": "nvmf_get_transports", 00:05:06.044 "req_id": 1 00:05:06.044 } 00:05:06.044 Got JSON-RPC error response 00:05:06.044 response: 00:05:06.044 { 00:05:06.044 "code": -19, 00:05:06.044 "message": "No such device" 00:05:06.044 } 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.044 [2024-07-12 13:13:03.495163] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.044 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.302 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.302 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.302 { 00:05:06.302 "subsystems": [ 00:05:06.302 { 00:05:06.302 "subsystem": "vfio_user_target", 00:05:06.302 "config": null 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "subsystem": "keyring", 00:05:06.302 "config": [] 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "subsystem": "iobuf", 00:05:06.302 "config": [ 00:05:06.302 { 00:05:06.302 "method": "iobuf_set_options", 00:05:06.302 "params": { 00:05:06.302 "small_pool_count": 8192, 00:05:06.302 "large_pool_count": 1024, 00:05:06.302 "small_bufsize": 8192, 00:05:06.302 "large_bufsize": 135168 00:05:06.302 } 00:05:06.302 } 00:05:06.302 ] 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "subsystem": "sock", 00:05:06.302 "config": [ 00:05:06.302 { 00:05:06.302 "method": "sock_set_default_impl", 00:05:06.302 "params": { 00:05:06.302 "impl_name": "posix" 00:05:06.302 } 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "method": "sock_impl_set_options", 00:05:06.302 "params": { 00:05:06.302 "impl_name": "ssl", 00:05:06.302 "recv_buf_size": 4096, 00:05:06.302 "send_buf_size": 4096, 00:05:06.302 "enable_recv_pipe": true, 00:05:06.302 "enable_quickack": false, 00:05:06.302 "enable_placement_id": 0, 00:05:06.302 "enable_zerocopy_send_server": true, 00:05:06.302 
"enable_zerocopy_send_client": false, 00:05:06.302 "zerocopy_threshold": 0, 00:05:06.302 "tls_version": 0, 00:05:06.302 "enable_ktls": false 00:05:06.302 } 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "method": "sock_impl_set_options", 00:05:06.302 "params": { 00:05:06.302 "impl_name": "posix", 00:05:06.302 "recv_buf_size": 2097152, 00:05:06.302 "send_buf_size": 2097152, 00:05:06.302 "enable_recv_pipe": true, 00:05:06.302 "enable_quickack": false, 00:05:06.302 "enable_placement_id": 0, 00:05:06.302 "enable_zerocopy_send_server": true, 00:05:06.302 "enable_zerocopy_send_client": false, 00:05:06.302 "zerocopy_threshold": 0, 00:05:06.302 "tls_version": 0, 00:05:06.302 "enable_ktls": false 00:05:06.302 } 00:05:06.302 } 00:05:06.302 ] 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "subsystem": "vmd", 00:05:06.302 "config": [] 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "subsystem": "accel", 00:05:06.302 "config": [ 00:05:06.302 { 00:05:06.302 "method": "accel_set_options", 00:05:06.302 "params": { 00:05:06.302 "small_cache_size": 128, 00:05:06.302 "large_cache_size": 16, 00:05:06.302 "task_count": 2048, 00:05:06.302 "sequence_count": 2048, 00:05:06.302 "buf_count": 2048 00:05:06.302 } 00:05:06.302 } 00:05:06.302 ] 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "subsystem": "bdev", 00:05:06.302 "config": [ 00:05:06.302 { 00:05:06.302 "method": "bdev_set_options", 00:05:06.302 "params": { 00:05:06.302 "bdev_io_pool_size": 65535, 00:05:06.302 "bdev_io_cache_size": 256, 00:05:06.302 "bdev_auto_examine": true, 00:05:06.302 "iobuf_small_cache_size": 128, 00:05:06.302 "iobuf_large_cache_size": 16 00:05:06.302 } 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "method": "bdev_raid_set_options", 00:05:06.302 "params": { 00:05:06.302 "process_window_size_kb": 1024 00:05:06.302 } 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "method": "bdev_iscsi_set_options", 00:05:06.302 "params": { 00:05:06.302 "timeout_sec": 30 00:05:06.302 } 00:05:06.302 }, 00:05:06.302 { 00:05:06.302 "method": "bdev_nvme_set_options", 00:05:06.302 "params": { 00:05:06.302 "action_on_timeout": "none", 00:05:06.302 "timeout_us": 0, 00:05:06.302 "timeout_admin_us": 0, 00:05:06.302 "keep_alive_timeout_ms": 10000, 00:05:06.302 "arbitration_burst": 0, 00:05:06.302 "low_priority_weight": 0, 00:05:06.302 "medium_priority_weight": 0, 00:05:06.302 "high_priority_weight": 0, 00:05:06.302 "nvme_adminq_poll_period_us": 10000, 00:05:06.302 "nvme_ioq_poll_period_us": 0, 00:05:06.302 "io_queue_requests": 0, 00:05:06.302 "delay_cmd_submit": true, 00:05:06.302 "transport_retry_count": 4, 00:05:06.302 "bdev_retry_count": 3, 00:05:06.302 "transport_ack_timeout": 0, 00:05:06.302 "ctrlr_loss_timeout_sec": 0, 00:05:06.302 "reconnect_delay_sec": 0, 00:05:06.302 "fast_io_fail_timeout_sec": 0, 00:05:06.302 "disable_auto_failback": false, 00:05:06.302 "generate_uuids": false, 00:05:06.302 "transport_tos": 0, 00:05:06.302 "nvme_error_stat": false, 00:05:06.302 "rdma_srq_size": 0, 00:05:06.302 "io_path_stat": false, 00:05:06.302 "allow_accel_sequence": false, 00:05:06.302 "rdma_max_cq_size": 0, 00:05:06.302 "rdma_cm_event_timeout_ms": 0, 00:05:06.302 "dhchap_digests": [ 00:05:06.302 "sha256", 00:05:06.302 "sha384", 00:05:06.302 "sha512" 00:05:06.302 ], 00:05:06.302 "dhchap_dhgroups": [ 00:05:06.302 "null", 00:05:06.302 "ffdhe2048", 00:05:06.302 "ffdhe3072", 00:05:06.302 "ffdhe4096", 00:05:06.302 "ffdhe6144", 00:05:06.302 "ffdhe8192" 00:05:06.302 ] 00:05:06.303 } 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "method": "bdev_nvme_set_hotplug", 00:05:06.303 "params": { 
00:05:06.303 "period_us": 100000, 00:05:06.303 "enable": false 00:05:06.303 } 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "method": "bdev_wait_for_examine" 00:05:06.303 } 00:05:06.303 ] 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "subsystem": "scsi", 00:05:06.303 "config": null 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "subsystem": "scheduler", 00:05:06.303 "config": [ 00:05:06.303 { 00:05:06.303 "method": "framework_set_scheduler", 00:05:06.303 "params": { 00:05:06.303 "name": "static" 00:05:06.303 } 00:05:06.303 } 00:05:06.303 ] 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "subsystem": "vhost_scsi", 00:05:06.303 "config": [] 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "subsystem": "vhost_blk", 00:05:06.303 "config": [] 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "subsystem": "ublk", 00:05:06.303 "config": [] 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "subsystem": "nbd", 00:05:06.303 "config": [] 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "subsystem": "nvmf", 00:05:06.303 "config": [ 00:05:06.303 { 00:05:06.303 "method": "nvmf_set_config", 00:05:06.303 "params": { 00:05:06.303 "discovery_filter": "match_any", 00:05:06.303 "admin_cmd_passthru": { 00:05:06.303 "identify_ctrlr": false 00:05:06.303 } 00:05:06.303 } 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "method": "nvmf_set_max_subsystems", 00:05:06.303 "params": { 00:05:06.303 "max_subsystems": 1024 00:05:06.303 } 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "method": "nvmf_set_crdt", 00:05:06.303 "params": { 00:05:06.303 "crdt1": 0, 00:05:06.303 "crdt2": 0, 00:05:06.303 "crdt3": 0 00:05:06.303 } 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "method": "nvmf_create_transport", 00:05:06.303 "params": { 00:05:06.303 "trtype": "TCP", 00:05:06.303 "max_queue_depth": 128, 00:05:06.303 "max_io_qpairs_per_ctrlr": 127, 00:05:06.303 "in_capsule_data_size": 4096, 00:05:06.303 "max_io_size": 131072, 00:05:06.303 "io_unit_size": 131072, 00:05:06.303 "max_aq_depth": 128, 00:05:06.303 "num_shared_buffers": 511, 00:05:06.303 "buf_cache_size": 4294967295, 00:05:06.303 "dif_insert_or_strip": false, 00:05:06.303 "zcopy": false, 00:05:06.303 "c2h_success": true, 00:05:06.303 "sock_priority": 0, 00:05:06.303 "abort_timeout_sec": 1, 00:05:06.303 "ack_timeout": 0, 00:05:06.303 "data_wr_pool_size": 0 00:05:06.303 } 00:05:06.303 } 00:05:06.303 ] 00:05:06.303 }, 00:05:06.303 { 00:05:06.303 "subsystem": "iscsi", 00:05:06.303 "config": [ 00:05:06.303 { 00:05:06.303 "method": "iscsi_set_options", 00:05:06.303 "params": { 00:05:06.303 "node_base": "iqn.2016-06.io.spdk", 00:05:06.303 "max_sessions": 128, 00:05:06.303 "max_connections_per_session": 2, 00:05:06.303 "max_queue_depth": 64, 00:05:06.303 "default_time2wait": 2, 00:05:06.303 "default_time2retain": 20, 00:05:06.303 "first_burst_length": 8192, 00:05:06.303 "immediate_data": true, 00:05:06.303 "allow_duplicated_isid": false, 00:05:06.303 "error_recovery_level": 0, 00:05:06.303 "nop_timeout": 60, 00:05:06.303 "nop_in_interval": 30, 00:05:06.303 "disable_chap": false, 00:05:06.303 "require_chap": false, 00:05:06.303 "mutual_chap": false, 00:05:06.303 "chap_group": 0, 00:05:06.303 "max_large_datain_per_connection": 64, 00:05:06.303 "max_r2t_per_connection": 4, 00:05:06.303 "pdu_pool_size": 36864, 00:05:06.303 "immediate_data_pool_size": 16384, 00:05:06.303 "data_out_pool_size": 2048 00:05:06.303 } 00:05:06.303 } 00:05:06.303 ] 00:05:06.303 } 00:05:06.303 ] 00:05:06.303 } 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:06.303 13:13:03 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3442622 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3442622 ']' 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3442622 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3442622 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3442622' 00:05:06.303 killing process with pid 3442622 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3442622 00:05:06.303 13:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3442622 00:05:06.869 13:13:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3442760 00:05:06.869 13:13:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.869 13:13:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3442760 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3442760 ']' 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3442760 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3442760 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3442760' 00:05:12.180 killing process with pid 3442760 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3442760 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3442760 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.180 00:05:12.180 real 0m6.450s 00:05:12.180 user 0m6.050s 00:05:12.180 sys 0m0.671s 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.180 ************************************ 00:05:12.180 END 
TEST skip_rpc_with_json 00:05:12.180 ************************************ 00:05:12.180 13:13:09 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.180 13:13:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:12.180 13:13:09 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.180 13:13:09 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.180 13:13:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.180 ************************************ 00:05:12.180 START TEST skip_rpc_with_delay 00:05:12.180 ************************************ 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.180 [2024-07-12 13:13:09.598754] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:12.180 [2024-07-12 13:13:09.598884] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.180 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.180 00:05:12.180 real 0m0.069s 00:05:12.180 user 0m0.046s 00:05:12.180 sys 0m0.023s 00:05:12.181 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.181 13:13:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:12.181 ************************************ 00:05:12.181 END TEST skip_rpc_with_delay 00:05:12.181 ************************************ 00:05:12.441 13:13:09 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.441 13:13:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:12.441 13:13:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:12.441 13:13:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:12.441 13:13:09 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.441 13:13:09 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.441 13:13:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.441 ************************************ 00:05:12.441 START TEST exit_on_failed_rpc_init 00:05:12.441 ************************************ 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3443474 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3443474 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3443474 ']' 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.441 13:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.441 [2024-07-12 13:13:09.709624] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
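exit_on_failed_rpc_init starts one target on the default RPC socket and then launches a second one, which must exit non-zero because /var/tmp/spdk.sock is already taken (the "in use. Specify another." error further down). A hand-run sketch of the same conflict, with the harness's waitforlisten replaced by a plain sleep for brevity:

  ./build/bin/spdk_tgt -m 0x1 &     # first target binds the default /var/tmp/spdk.sock
  sleep 5
  ./build/bin/spdk_tgt -m 0x2       # expected to fail: RPC Unix domain socket path in use
  echo "second instance exit status: $?"
  kill %1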
00:05:12.441 [2024-07-12 13:13:09.709724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443474 ] 00:05:12.441 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.441 [2024-07-12 13:13:09.740868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:12.441 [2024-07-12 13:13:09.766665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.441 [2024-07-12 13:13:09.849757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:12.698 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.698 [2024-07-12 13:13:10.145845] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:12.698 [2024-07-12 13:13:10.145938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443491 ] 00:05:12.955 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.955 [2024-07-12 13:13:10.176216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:12.955 [2024-07-12 13:13:10.203138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.955 [2024-07-12 13:13:10.290208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.955 [2024-07-12 13:13:10.290306] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:12.955 [2024-07-12 13:13:10.290344] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:12.955 [2024-07-12 13:13:10.290357] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3443474 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3443474 ']' 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3443474 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3443474 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3443474' 00:05:12.955 killing process with pid 3443474 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3443474 00:05:12.955 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3443474 00:05:13.521 00:05:13.521 real 0m1.144s 00:05:13.521 user 0m1.236s 00:05:13.521 sys 0m0.446s 00:05:13.521 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.521 13:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 ************************************ 00:05:13.521 END TEST exit_on_failed_rpc_init 00:05:13.521 ************************************ 00:05:13.521 13:13:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:13.521 13:13:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.521 00:05:13.521 real 0m13.324s 00:05:13.521 user 0m12.539s 00:05:13.521 sys 0m1.611s 00:05:13.521 13:13:10 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.521 13:13:10 skip_rpc 
-- common/autotest_common.sh@10 -- # set +x 00:05:13.521 ************************************ 00:05:13.521 END TEST skip_rpc 00:05:13.521 ************************************ 00:05:13.521 13:13:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.521 13:13:10 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:13.521 13:13:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.521 13:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.521 13:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 ************************************ 00:05:13.521 START TEST rpc_client 00:05:13.521 ************************************ 00:05:13.521 13:13:10 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:13.521 * Looking for test storage... 00:05:13.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:13.521 13:13:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:13.521 OK 00:05:13.521 13:13:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:13.521 00:05:13.521 real 0m0.071s 00:05:13.521 user 0m0.031s 00:05:13.521 sys 0m0.044s 00:05:13.521 13:13:10 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.521 13:13:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 ************************************ 00:05:13.521 END TEST rpc_client 00:05:13.521 ************************************ 00:05:13.521 13:13:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.521 13:13:10 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:13.521 13:13:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.521 13:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.521 13:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.779 ************************************ 00:05:13.779 START TEST json_config 00:05:13.779 ************************************ 00:05:13.779 13:13:10 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:13.779 13:13:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.779 13:13:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.780 13:13:11 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.780 13:13:11 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.780 13:13:11 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.780 13:13:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.780 13:13:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.780 13:13:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.780 13:13:11 json_config -- paths/export.sh@5 -- # export PATH 00:05:13.780 13:13:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@47 -- # : 0 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@35 -- 
# '[' 0 -eq 1 ']' 00:05:13.780 13:13:11 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:13.780 INFO: JSON configuration test init 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.780 13:13:11 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:13.780 13:13:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:13.780 13:13:11 json_config -- json_config/common.sh@10 -- # shift 00:05:13.780 13:13:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.780 13:13:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.780 13:13:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.780 13:13:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.780 13:13:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.780 13:13:11 json_config -- json_config/common.sh@22 -- # 
app_pid["$app"]=3443730 00:05:13.780 13:13:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:13.780 13:13:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.780 Waiting for target to run... 00:05:13.780 13:13:11 json_config -- json_config/common.sh@25 -- # waitforlisten 3443730 /var/tmp/spdk_tgt.sock 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@829 -- # '[' -z 3443730 ']' 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.780 13:13:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.780 [2024-07-12 13:13:11.105387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:13.780 [2024-07-12 13:13:11.105481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443730 ] 00:05:13.780 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.038 [2024-07-12 13:13:11.418950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:14.038 [2024-07-12 13:13:11.446991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.038 [2024-07-12 13:13:11.500760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.603 13:13:12 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.603 13:13:12 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:14.603 13:13:12 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.603 00:05:14.603 13:13:12 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:14.603 13:13:12 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:14.603 13:13:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.603 13:13:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.603 13:13:12 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:14.603 13:13:12 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:14.603 13:13:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.603 13:13:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.603 13:13:12 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:14.603 13:13:12 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:14.603 13:13:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:17.885 13:13:15 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:17.885 13:13:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:17.885 13:13:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.885 13:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.885 13:13:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:17.885 13:13:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:17.885 13:13:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:17.885 13:13:15 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:17.885 13:13:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.885 13:13:15 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:18.143 13:13:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.143 13:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 
]] 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:18.143 13:13:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.143 13:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:18.143 13:13:15 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.143 13:13:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.401 MallocForNvmf0 00:05:18.401 13:13:15 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.401 13:13:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.659 MallocForNvmf1 00:05:18.659 13:13:15 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.659 13:13:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.917 [2024-07-12 13:13:16.171571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.917 13:13:16 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.917 13:13:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.175 13:13:16 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.175 13:13:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.432 13:13:16 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.432 13:13:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.689 13:13:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.689 13:13:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.689 [2024-07-12 
13:13:17.138776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.689 13:13:17 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:19.689 13:13:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.689 13:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.946 13:13:17 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:19.946 13:13:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.946 13:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.946 13:13:17 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:19.946 13:13:17 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.946 13:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.203 MallocBdevForConfigChangeCheck 00:05:20.203 13:13:17 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:20.203 13:13:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.203 13:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.203 13:13:17 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:20.203 13:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.461 13:13:17 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:20.461 INFO: shutting down applications... 
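[editor note] The RPC sequence traced above builds the NVMe-oF/TCP test configuration: two malloc bdevs, a TCP transport, subsystem cnode1 with both namespaces and a 127.0.0.1:4420 listener, plus the MallocBdevForConfigChangeCheck marker, before the state is captured with save_config. Condensed into one sketch; rpc() is a hypothetical wrapper standing in for the tgt_rpc helper in the log, and the redirect target for save_config is an assumption (the log reuses spdk_tgt_config.json at relaunch but does not show the redirection itself):

  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }   # hypothetical helper

  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  rpc save_config > spdk_tgt_config.json   # assumed destination file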
00:05:20.461 13:13:17 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:20.461 13:13:17 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:20.461 13:13:17 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:20.461 13:13:17 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:22.359 Calling clear_iscsi_subsystem 00:05:22.359 Calling clear_nvmf_subsystem 00:05:22.359 Calling clear_nbd_subsystem 00:05:22.359 Calling clear_ublk_subsystem 00:05:22.359 Calling clear_vhost_blk_subsystem 00:05:22.359 Calling clear_vhost_scsi_subsystem 00:05:22.359 Calling clear_bdev_subsystem 00:05:22.359 13:13:19 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:22.359 13:13:19 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:22.359 13:13:19 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:22.359 13:13:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.359 13:13:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:22.360 13:13:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:22.360 13:13:19 json_config -- json_config/json_config.sh@345 -- # break 00:05:22.360 13:13:19 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:22.360 13:13:19 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:22.360 13:13:19 json_config -- json_config/common.sh@31 -- # local app=target 00:05:22.360 13:13:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.360 13:13:19 json_config -- json_config/common.sh@35 -- # [[ -n 3443730 ]] 00:05:22.360 13:13:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3443730 00:05:22.360 13:13:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.360 13:13:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.360 13:13:19 json_config -- json_config/common.sh@41 -- # kill -0 3443730 00:05:22.360 13:13:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.928 13:13:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.928 13:13:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.928 13:13:20 json_config -- json_config/common.sh@41 -- # kill -0 3443730 00:05:22.928 13:13:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.928 13:13:20 json_config -- json_config/common.sh@43 -- # break 00:05:22.928 13:13:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.928 13:13:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.928 SPDK target shutdown done 00:05:22.928 13:13:20 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:22.928 INFO: relaunching applications... 
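[editor note] Shutdown above is a two-step pattern: clear_config.py empties every subsystem until config_filter.py -method check_empty passes, then the target receives SIGINT and the script polls it for up to 30 half-second intervals before declaring "SPDK target shutdown done". A sketch of that polling loop, assuming $tgt_pid still holds the pid recorded at launch:

  kill -SIGINT "$tgt_pid"
  for i in $(seq 1 30); do
    # kill -0 only tests for existence; success means the process is still up.
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
      echo 'SPDK target shutdown done'
      break
    fi
    sleep 0.5
  done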
00:05:22.928 13:13:20 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.928 13:13:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:22.928 13:13:20 json_config -- json_config/common.sh@10 -- # shift 00:05:22.928 13:13:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.928 13:13:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.928 13:13:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.928 13:13:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.928 13:13:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.928 13:13:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3444921 00:05:22.928 13:13:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.928 13:13:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.928 Waiting for target to run... 00:05:22.928 13:13:20 json_config -- json_config/common.sh@25 -- # waitforlisten 3444921 /var/tmp/spdk_tgt.sock 00:05:22.928 13:13:20 json_config -- common/autotest_common.sh@829 -- # '[' -z 3444921 ']' 00:05:22.928 13:13:20 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.928 13:13:20 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.928 13:13:20 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.928 13:13:20 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.928 13:13:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.928 [2024-07-12 13:13:20.378642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:22.928 [2024-07-12 13:13:20.378732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3444921 ] 00:05:23.186 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.444 [2024-07-12 13:13:20.868347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
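[editor note] The relaunch above differs from the first start in a single flag: instead of --wait-for-rpc, the target is given --json spdk_tgt_config.json so the configuration saved in the previous run is replayed at startup. A sketch with the same $SPDK_DIR placeholder; the wait loop from the earlier sketch applies unchanged afterwards:

  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK_DIR/spdk_tgt_config.json" &
  tgt_pid=$!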
00:05:23.444 [2024-07-12 13:13:20.896066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.703 [2024-07-12 13:13:20.970808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.983 [2024-07-12 13:13:23.993221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.983 [2024-07-12 13:13:24.025705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:27.604 13:13:24 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.604 13:13:24 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:27.604 13:13:24 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.604 00:05:27.604 13:13:24 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:27.604 13:13:24 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:27.604 INFO: Checking if target configuration is the same... 00:05:27.604 13:13:24 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.604 13:13:24 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:27.604 13:13:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.604 + '[' 2 -ne 2 ']' 00:05:27.604 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:27.604 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:27.604 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.604 +++ basename /dev/fd/62 00:05:27.604 ++ mktemp /tmp/62.XXX 00:05:27.604 + tmp_file_1=/tmp/62.1Rd 00:05:27.604 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.604 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.604 + tmp_file_2=/tmp/spdk_tgt_config.json.nig 00:05:27.604 + ret=0 00:05:27.604 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.861 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.861 + diff -u /tmp/62.1Rd /tmp/spdk_tgt_config.json.nig 00:05:27.861 + echo 'INFO: JSON config files are the same' 00:05:27.861 INFO: JSON config files are the same 00:05:27.861 + rm /tmp/62.1Rd /tmp/spdk_tgt_config.json.nig 00:05:27.861 + exit 0 00:05:27.861 13:13:25 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:27.861 13:13:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:27.861 INFO: changing configuration and checking if this can be detected... 
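[editor note] The comparison above (json_diff.sh) normalizes both sides with config_filter.py -method sort before diffing, so key ordering in the saved JSON cannot cause false mismatches. Roughly the following, where sort_cfg is a hypothetical stdin/stdout wrapper and the fixed /tmp paths stand in for the mktemp files (/tmp/62.1Rd etc.) in the log:

  sort_cfg() { "$SPDK_DIR/test/json_config/config_filter.py" -method sort; }   # hypothetical helper

  rpc save_config | sort_cfg > /tmp/live_sorted.json        # current target state
  sort_cfg < spdk_tgt_config.json > /tmp/saved_sorted.json  # previously saved state
  diff -u /tmp/saved_sorted.json /tmp/live_sorted.json \
    && echo 'INFO: JSON config files are the same'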
00:05:27.861 13:13:25 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.861 13:13:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:28.118 13:13:25 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.118 13:13:25 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:28.118 13:13:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.118 + '[' 2 -ne 2 ']' 00:05:28.118 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:28.118 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:28.118 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:28.118 +++ basename /dev/fd/62 00:05:28.118 ++ mktemp /tmp/62.XXX 00:05:28.118 + tmp_file_1=/tmp/62.I2X 00:05:28.118 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.118 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.118 + tmp_file_2=/tmp/spdk_tgt_config.json.W3P 00:05:28.118 + ret=0 00:05:28.118 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.422 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.680 + diff -u /tmp/62.I2X /tmp/spdk_tgt_config.json.W3P 00:05:28.680 + ret=1 00:05:28.680 + echo '=== Start of file: /tmp/62.I2X ===' 00:05:28.680 + cat /tmp/62.I2X 00:05:28.680 + echo '=== End of file: /tmp/62.I2X ===' 00:05:28.680 + echo '' 00:05:28.680 + echo '=== Start of file: /tmp/spdk_tgt_config.json.W3P ===' 00:05:28.680 + cat /tmp/spdk_tgt_config.json.W3P 00:05:28.680 + echo '=== End of file: /tmp/spdk_tgt_config.json.W3P ===' 00:05:28.680 + echo '' 00:05:28.680 + rm /tmp/62.I2X /tmp/spdk_tgt_config.json.W3P 00:05:28.680 + exit 1 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:28.680 INFO: configuration change detected. 
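[editor note] Change detection above reuses the same diff: removing the marker bdev is enough to make the sorted configs diverge, so diff exits non-zero (ret=1) and the test reports the change. A short sketch reusing the hypothetical rpc/sort_cfg helpers and placeholder temp file from the previous sketches:

  rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
  rpc save_config | sort_cfg | diff -u /tmp/saved_sorted.json - >/dev/null \
    || echo 'INFO: configuration change detected.'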
00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@317 -- # [[ -n 3444921 ]] 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.680 13:13:25 json_config -- json_config/json_config.sh@323 -- # killprocess 3444921 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@948 -- # '[' -z 3444921 ']' 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@952 -- # kill -0 3444921 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@953 -- # uname 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3444921 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3444921' 00:05:28.680 killing process with pid 3444921 00:05:28.680 13:13:25 json_config -- common/autotest_common.sh@967 -- # kill 3444921 00:05:28.681 13:13:25 json_config -- common/autotest_common.sh@972 -- # wait 3444921 00:05:30.053 13:13:27 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.053 13:13:27 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:30.053 13:13:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.053 13:13:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.053 13:13:27 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:30.053 13:13:27 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:30.053 INFO: Success 00:05:30.053 00:05:30.053 real 0m16.478s 
00:05:30.053 user 0m18.344s 00:05:30.053 sys 0m2.024s 00:05:30.053 13:13:27 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.053 13:13:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.053 ************************************ 00:05:30.053 END TEST json_config 00:05:30.053 ************************************ 00:05:30.053 13:13:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.053 13:13:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.053 13:13:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.053 13:13:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.053 13:13:27 -- common/autotest_common.sh@10 -- # set +x 00:05:30.053 ************************************ 00:05:30.053 START TEST json_config_extra_key 00:05:30.053 ************************************ 00:05:30.053 13:13:27 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.311 13:13:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.311 13:13:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.311 13:13:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.311 13:13:27 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.311 13:13:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.311 13:13:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.311 13:13:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:30.311 13:13:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:30.311 13:13:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:30.311 13:13:27 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:30.311 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.312 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:30.312 INFO: launching applications... 00:05:30.312 13:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3445960 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.312 Waiting for target to run... 00:05:30.312 13:13:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3445960 /var/tmp/spdk_tgt.sock 00:05:30.312 13:13:27 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3445960 ']' 00:05:30.312 13:13:27 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.312 13:13:27 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.312 13:13:27 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.312 13:13:27 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.312 13:13:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.312 [2024-07-12 13:13:27.623042] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
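[editor note] json_config_extra_key boots the target directly from a canned file, test/json_config/extra_key.json, instead of building state over RPC. The real file's contents are not shown in this log; the sketch below writes a hypothetical minimal config in the {"subsystems": [...]} shape that spdk_tgt's --json loader consumes (the same shape save_config emits) and starts the target from it:

  # Hypothetical stand-in config; NOT the real extra_key.json.
  printf '%s\n' '{"subsystems": [{"subsystem": "bdev", "config": [{"method": "bdev_malloc_create", "params": {"name": "Malloc0", "num_blocks": 2048, "block_size": 512}}]}]}' \
    > /tmp/extra_key_example.json

  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key_example.json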
00:05:30.312 [2024-07-12 13:13:27.623125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445960 ] 00:05:30.312 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.879 [2024-07-12 13:13:28.111427] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.879 [2024-07-12 13:13:28.139082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.879 [2024-07-12 13:13:28.212374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.137 13:13:28 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.137 13:13:28 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:31.137 00:05:31.137 13:13:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:31.137 INFO: shutting down applications... 00:05:31.137 13:13:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3445960 ]] 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3445960 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3445960 00:05:31.137 13:13:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.702 13:13:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.702 13:13:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.702 13:13:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3445960 00:05:31.702 13:13:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:31.702 13:13:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:31.702 13:13:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:31.702 13:13:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:31.702 SPDK target shutdown done 00:05:31.703 13:13:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:31.703 Success 00:05:31.703 00:05:31.703 real 0m1.555s 00:05:31.703 user 0m1.347s 00:05:31.703 sys 0m0.616s 00:05:31.703 13:13:29 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.703 13:13:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.703 ************************************ 00:05:31.703 END TEST json_config_extra_key 00:05:31.703 ************************************ 00:05:31.703 13:13:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.703 13:13:29 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.703 13:13:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.703 13:13:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.703 13:13:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.703 ************************************ 00:05:31.703 START TEST alias_rpc 00:05:31.703 ************************************ 00:05:31.703 13:13:29 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.961 * Looking for test storage... 00:05:31.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:31.961 13:13:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.961 13:13:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3446147 00:05:31.961 13:13:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.961 13:13:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3446147 00:05:31.961 13:13:29 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3446147 ']' 00:05:31.961 13:13:29 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.961 13:13:29 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.961 13:13:29 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.961 13:13:29 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.961 13:13:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.961 [2024-07-12 13:13:29.234096] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:31.961 [2024-07-12 13:13:29.234176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446147 ] 00:05:31.961 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.961 [2024-07-12 13:13:29.264723] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:31.961 [2024-07-12 13:13:29.291269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.961 [2024-07-12 13:13:29.374785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.219 13:13:29 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.219 13:13:29 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:32.219 13:13:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:32.477 13:13:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3446147 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3446147 ']' 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3446147 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3446147 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3446147' 00:05:32.477 killing process with pid 3446147 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@967 -- # kill 3446147 00:05:32.477 13:13:29 alias_rpc -- common/autotest_common.sh@972 -- # wait 3446147 00:05:33.042 00:05:33.042 real 0m1.186s 00:05:33.042 user 0m1.257s 00:05:33.042 sys 0m0.410s 00:05:33.042 13:13:30 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.042 13:13:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 ************************************ 00:05:33.042 END TEST alias_rpc 00:05:33.042 ************************************ 00:05:33.042 13:13:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.042 13:13:30 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:33.042 13:13:30 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:33.042 13:13:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.042 13:13:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.042 13:13:30 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 ************************************ 00:05:33.042 START TEST spdkcli_tcp 00:05:33.042 ************************************ 00:05:33.042 13:13:30 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:33.043 * Looking for test storage... 
00:05:33.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:33.043 13:13:30 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.043 13:13:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3446335 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:33.043 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3446335 00:05:33.043 13:13:30 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3446335 ']' 00:05:33.043 13:13:30 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.043 13:13:30 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.043 13:13:30 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.043 13:13:30 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.043 13:13:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.043 [2024-07-12 13:13:30.481367] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:33.043 [2024-07-12 13:13:30.481465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446335 ] 00:05:33.043 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.043 [2024-07-12 13:13:30.513468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
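[editor note] The spdkcli_tcp run above starts the target on two cores (-m 0x3) and sets IP_ADDRESS=127.0.0.1, PORT=9998; the lines that follow bridge the UNIX RPC socket onto that TCP port with socat and issue rpc_get_methods through it. A sketch of the bridge, assuming socat is installed and /var/tmp/spdk.sock is the target's RPC socket as in the log:

  # Expose the UNIX RPC socket on 127.0.0.1:9998.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Same RPC, but over TCP; -r/-t match the retry and timeout values in the log.
  "$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid"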
00:05:33.301 [2024-07-12 13:13:30.540636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.301 [2024-07-12 13:13:30.628373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.301 [2024-07-12 13:13:30.628377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.559 13:13:30 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.559 13:13:30 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:33.559 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3446464 00:05:33.559 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:33.559 13:13:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:33.816 [ 00:05:33.816 "bdev_malloc_delete", 00:05:33.816 "bdev_malloc_create", 00:05:33.816 "bdev_null_resize", 00:05:33.816 "bdev_null_delete", 00:05:33.816 "bdev_null_create", 00:05:33.816 "bdev_nvme_cuse_unregister", 00:05:33.816 "bdev_nvme_cuse_register", 00:05:33.816 "bdev_opal_new_user", 00:05:33.816 "bdev_opal_set_lock_state", 00:05:33.816 "bdev_opal_delete", 00:05:33.816 "bdev_opal_get_info", 00:05:33.816 "bdev_opal_create", 00:05:33.816 "bdev_nvme_opal_revert", 00:05:33.816 "bdev_nvme_opal_init", 00:05:33.816 "bdev_nvme_send_cmd", 00:05:33.816 "bdev_nvme_get_path_iostat", 00:05:33.816 "bdev_nvme_get_mdns_discovery_info", 00:05:33.816 "bdev_nvme_stop_mdns_discovery", 00:05:33.816 "bdev_nvme_start_mdns_discovery", 00:05:33.816 "bdev_nvme_set_multipath_policy", 00:05:33.816 "bdev_nvme_set_preferred_path", 00:05:33.816 "bdev_nvme_get_io_paths", 00:05:33.816 "bdev_nvme_remove_error_injection", 00:05:33.816 "bdev_nvme_add_error_injection", 00:05:33.816 "bdev_nvme_get_discovery_info", 00:05:33.816 "bdev_nvme_stop_discovery", 00:05:33.816 "bdev_nvme_start_discovery", 00:05:33.816 "bdev_nvme_get_controller_health_info", 00:05:33.816 "bdev_nvme_disable_controller", 00:05:33.816 "bdev_nvme_enable_controller", 00:05:33.817 "bdev_nvme_reset_controller", 00:05:33.817 "bdev_nvme_get_transport_statistics", 00:05:33.817 "bdev_nvme_apply_firmware", 00:05:33.817 "bdev_nvme_detach_controller", 00:05:33.817 "bdev_nvme_get_controllers", 00:05:33.817 "bdev_nvme_attach_controller", 00:05:33.817 "bdev_nvme_set_hotplug", 00:05:33.817 "bdev_nvme_set_options", 00:05:33.817 "bdev_passthru_delete", 00:05:33.817 "bdev_passthru_create", 00:05:33.817 "bdev_lvol_set_parent_bdev", 00:05:33.817 "bdev_lvol_set_parent", 00:05:33.817 "bdev_lvol_check_shallow_copy", 00:05:33.817 "bdev_lvol_start_shallow_copy", 00:05:33.817 "bdev_lvol_grow_lvstore", 00:05:33.817 "bdev_lvol_get_lvols", 00:05:33.817 "bdev_lvol_get_lvstores", 00:05:33.817 "bdev_lvol_delete", 00:05:33.817 "bdev_lvol_set_read_only", 00:05:33.817 "bdev_lvol_resize", 00:05:33.817 "bdev_lvol_decouple_parent", 00:05:33.817 "bdev_lvol_inflate", 00:05:33.817 "bdev_lvol_rename", 00:05:33.817 "bdev_lvol_clone_bdev", 00:05:33.817 "bdev_lvol_clone", 00:05:33.817 "bdev_lvol_snapshot", 00:05:33.817 "bdev_lvol_create", 00:05:33.817 "bdev_lvol_delete_lvstore", 00:05:33.817 "bdev_lvol_rename_lvstore", 00:05:33.817 "bdev_lvol_create_lvstore", 00:05:33.817 "bdev_raid_set_options", 00:05:33.817 "bdev_raid_remove_base_bdev", 00:05:33.817 "bdev_raid_add_base_bdev", 00:05:33.817 "bdev_raid_delete", 00:05:33.817 "bdev_raid_create", 00:05:33.817 "bdev_raid_get_bdevs", 00:05:33.817 "bdev_error_inject_error", 00:05:33.817 "bdev_error_delete", 
00:05:33.817 "bdev_error_create", 00:05:33.817 "bdev_split_delete", 00:05:33.817 "bdev_split_create", 00:05:33.817 "bdev_delay_delete", 00:05:33.817 "bdev_delay_create", 00:05:33.817 "bdev_delay_update_latency", 00:05:33.817 "bdev_zone_block_delete", 00:05:33.817 "bdev_zone_block_create", 00:05:33.817 "blobfs_create", 00:05:33.817 "blobfs_detect", 00:05:33.817 "blobfs_set_cache_size", 00:05:33.817 "bdev_aio_delete", 00:05:33.817 "bdev_aio_rescan", 00:05:33.817 "bdev_aio_create", 00:05:33.817 "bdev_ftl_set_property", 00:05:33.817 "bdev_ftl_get_properties", 00:05:33.817 "bdev_ftl_get_stats", 00:05:33.817 "bdev_ftl_unmap", 00:05:33.817 "bdev_ftl_unload", 00:05:33.817 "bdev_ftl_delete", 00:05:33.817 "bdev_ftl_load", 00:05:33.817 "bdev_ftl_create", 00:05:33.817 "bdev_virtio_attach_controller", 00:05:33.817 "bdev_virtio_scsi_get_devices", 00:05:33.817 "bdev_virtio_detach_controller", 00:05:33.817 "bdev_virtio_blk_set_hotplug", 00:05:33.817 "bdev_iscsi_delete", 00:05:33.817 "bdev_iscsi_create", 00:05:33.817 "bdev_iscsi_set_options", 00:05:33.817 "accel_error_inject_error", 00:05:33.817 "ioat_scan_accel_module", 00:05:33.817 "dsa_scan_accel_module", 00:05:33.817 "iaa_scan_accel_module", 00:05:33.817 "vfu_virtio_create_scsi_endpoint", 00:05:33.817 "vfu_virtio_scsi_remove_target", 00:05:33.817 "vfu_virtio_scsi_add_target", 00:05:33.817 "vfu_virtio_create_blk_endpoint", 00:05:33.817 "vfu_virtio_delete_endpoint", 00:05:33.817 "keyring_file_remove_key", 00:05:33.817 "keyring_file_add_key", 00:05:33.817 "keyring_linux_set_options", 00:05:33.817 "iscsi_get_histogram", 00:05:33.817 "iscsi_enable_histogram", 00:05:33.817 "iscsi_set_options", 00:05:33.817 "iscsi_get_auth_groups", 00:05:33.817 "iscsi_auth_group_remove_secret", 00:05:33.817 "iscsi_auth_group_add_secret", 00:05:33.817 "iscsi_delete_auth_group", 00:05:33.817 "iscsi_create_auth_group", 00:05:33.817 "iscsi_set_discovery_auth", 00:05:33.817 "iscsi_get_options", 00:05:33.817 "iscsi_target_node_request_logout", 00:05:33.817 "iscsi_target_node_set_redirect", 00:05:33.817 "iscsi_target_node_set_auth", 00:05:33.817 "iscsi_target_node_add_lun", 00:05:33.817 "iscsi_get_stats", 00:05:33.817 "iscsi_get_connections", 00:05:33.817 "iscsi_portal_group_set_auth", 00:05:33.817 "iscsi_start_portal_group", 00:05:33.817 "iscsi_delete_portal_group", 00:05:33.817 "iscsi_create_portal_group", 00:05:33.817 "iscsi_get_portal_groups", 00:05:33.817 "iscsi_delete_target_node", 00:05:33.817 "iscsi_target_node_remove_pg_ig_maps", 00:05:33.817 "iscsi_target_node_add_pg_ig_maps", 00:05:33.817 "iscsi_create_target_node", 00:05:33.817 "iscsi_get_target_nodes", 00:05:33.817 "iscsi_delete_initiator_group", 00:05:33.817 "iscsi_initiator_group_remove_initiators", 00:05:33.817 "iscsi_initiator_group_add_initiators", 00:05:33.817 "iscsi_create_initiator_group", 00:05:33.817 "iscsi_get_initiator_groups", 00:05:33.817 "nvmf_set_crdt", 00:05:33.817 "nvmf_set_config", 00:05:33.817 "nvmf_set_max_subsystems", 00:05:33.817 "nvmf_stop_mdns_prr", 00:05:33.817 "nvmf_publish_mdns_prr", 00:05:33.817 "nvmf_subsystem_get_listeners", 00:05:33.817 "nvmf_subsystem_get_qpairs", 00:05:33.817 "nvmf_subsystem_get_controllers", 00:05:33.817 "nvmf_get_stats", 00:05:33.817 "nvmf_get_transports", 00:05:33.817 "nvmf_create_transport", 00:05:33.817 "nvmf_get_targets", 00:05:33.817 "nvmf_delete_target", 00:05:33.817 "nvmf_create_target", 00:05:33.817 "nvmf_subsystem_allow_any_host", 00:05:33.817 "nvmf_subsystem_remove_host", 00:05:33.817 "nvmf_subsystem_add_host", 00:05:33.817 "nvmf_ns_remove_host", 
00:05:33.817 "nvmf_ns_add_host", 00:05:33.817 "nvmf_subsystem_remove_ns", 00:05:33.817 "nvmf_subsystem_add_ns", 00:05:33.817 "nvmf_subsystem_listener_set_ana_state", 00:05:33.817 "nvmf_discovery_get_referrals", 00:05:33.817 "nvmf_discovery_remove_referral", 00:05:33.817 "nvmf_discovery_add_referral", 00:05:33.817 "nvmf_subsystem_remove_listener", 00:05:33.817 "nvmf_subsystem_add_listener", 00:05:33.817 "nvmf_delete_subsystem", 00:05:33.817 "nvmf_create_subsystem", 00:05:33.817 "nvmf_get_subsystems", 00:05:33.817 "env_dpdk_get_mem_stats", 00:05:33.817 "nbd_get_disks", 00:05:33.817 "nbd_stop_disk", 00:05:33.817 "nbd_start_disk", 00:05:33.817 "ublk_recover_disk", 00:05:33.817 "ublk_get_disks", 00:05:33.817 "ublk_stop_disk", 00:05:33.817 "ublk_start_disk", 00:05:33.817 "ublk_destroy_target", 00:05:33.817 "ublk_create_target", 00:05:33.817 "virtio_blk_create_transport", 00:05:33.817 "virtio_blk_get_transports", 00:05:33.817 "vhost_controller_set_coalescing", 00:05:33.817 "vhost_get_controllers", 00:05:33.817 "vhost_delete_controller", 00:05:33.817 "vhost_create_blk_controller", 00:05:33.817 "vhost_scsi_controller_remove_target", 00:05:33.817 "vhost_scsi_controller_add_target", 00:05:33.817 "vhost_start_scsi_controller", 00:05:33.817 "vhost_create_scsi_controller", 00:05:33.817 "thread_set_cpumask", 00:05:33.817 "framework_get_governor", 00:05:33.817 "framework_get_scheduler", 00:05:33.817 "framework_set_scheduler", 00:05:33.817 "framework_get_reactors", 00:05:33.817 "thread_get_io_channels", 00:05:33.817 "thread_get_pollers", 00:05:33.817 "thread_get_stats", 00:05:33.817 "framework_monitor_context_switch", 00:05:33.817 "spdk_kill_instance", 00:05:33.817 "log_enable_timestamps", 00:05:33.817 "log_get_flags", 00:05:33.817 "log_clear_flag", 00:05:33.817 "log_set_flag", 00:05:33.817 "log_get_level", 00:05:33.817 "log_set_level", 00:05:33.817 "log_get_print_level", 00:05:33.817 "log_set_print_level", 00:05:33.817 "framework_enable_cpumask_locks", 00:05:33.817 "framework_disable_cpumask_locks", 00:05:33.817 "framework_wait_init", 00:05:33.817 "framework_start_init", 00:05:33.817 "scsi_get_devices", 00:05:33.817 "bdev_get_histogram", 00:05:33.817 "bdev_enable_histogram", 00:05:33.817 "bdev_set_qos_limit", 00:05:33.817 "bdev_set_qd_sampling_period", 00:05:33.817 "bdev_get_bdevs", 00:05:33.817 "bdev_reset_iostat", 00:05:33.817 "bdev_get_iostat", 00:05:33.817 "bdev_examine", 00:05:33.817 "bdev_wait_for_examine", 00:05:33.817 "bdev_set_options", 00:05:33.817 "notify_get_notifications", 00:05:33.817 "notify_get_types", 00:05:33.817 "accel_get_stats", 00:05:33.817 "accel_set_options", 00:05:33.817 "accel_set_driver", 00:05:33.817 "accel_crypto_key_destroy", 00:05:33.817 "accel_crypto_keys_get", 00:05:33.817 "accel_crypto_key_create", 00:05:33.817 "accel_assign_opc", 00:05:33.817 "accel_get_module_info", 00:05:33.817 "accel_get_opc_assignments", 00:05:33.817 "vmd_rescan", 00:05:33.817 "vmd_remove_device", 00:05:33.817 "vmd_enable", 00:05:33.817 "sock_get_default_impl", 00:05:33.817 "sock_set_default_impl", 00:05:33.817 "sock_impl_set_options", 00:05:33.817 "sock_impl_get_options", 00:05:33.817 "iobuf_get_stats", 00:05:33.817 "iobuf_set_options", 00:05:33.817 "keyring_get_keys", 00:05:33.817 "framework_get_pci_devices", 00:05:33.817 "framework_get_config", 00:05:33.817 "framework_get_subsystems", 00:05:33.817 "vfu_tgt_set_base_path", 00:05:33.817 "trace_get_info", 00:05:33.817 "trace_get_tpoint_group_mask", 00:05:33.817 "trace_disable_tpoint_group", 00:05:33.817 "trace_enable_tpoint_group", 00:05:33.817 
"trace_clear_tpoint_mask", 00:05:33.817 "trace_set_tpoint_mask", 00:05:33.817 "spdk_get_version", 00:05:33.817 "rpc_get_methods" 00:05:33.817 ] 00:05:33.817 13:13:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.818 13:13:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:33.818 13:13:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3446335 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3446335 ']' 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3446335 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3446335 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3446335' 00:05:33.818 killing process with pid 3446335 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3446335 00:05:33.818 13:13:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3446335 00:05:34.383 00:05:34.383 real 0m1.196s 00:05:34.383 user 0m2.149s 00:05:34.383 sys 0m0.425s 00:05:34.383 13:13:31 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.383 13:13:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.383 ************************************ 00:05:34.383 END TEST spdkcli_tcp 00:05:34.383 ************************************ 00:05:34.383 13:13:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.383 13:13:31 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.383 13:13:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.383 13:13:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.383 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.383 ************************************ 00:05:34.383 START TEST dpdk_mem_utility 00:05:34.383 ************************************ 00:05:34.383 13:13:31 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.383 * Looking for test storage... 
00:05:34.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:34.383 13:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:34.383 13:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3446539 00:05:34.383 13:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.383 13:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3446539 00:05:34.383 13:13:31 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3446539 ']' 00:05:34.383 13:13:31 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.383 13:13:31 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.383 13:13:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.383 13:13:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.383 13:13:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.383 [2024-07-12 13:13:31.713923] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:34.383 [2024-07-12 13:13:31.714011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446539 ] 00:05:34.383 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.383 [2024-07-12 13:13:31.746409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
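The dpdk_mem_utility test starting here drives exactly two things against the freshly launched spdk_tgt: the env_dpdk_get_mem_stats RPC, which makes the target dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump. A minimal sketch of the same sequence run by hand, assuming an SPDK checkout at $SPDK_DIR and a target already listening on the default /var/tmp/spdk.sock:

    # ask the running target to dump DPDK memory state (written to /tmp/spdk_mem_dump.txt)
    $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats

    # summary view: heaps, mempools and memzones with their sizes
    $SPDK_DIR/scripts/dpdk_mem_info.py

    # detailed view of malloc heap 0: busy/free element lists, as in the dump that follows
    $SPDK_DIR/scripts/dpdk_mem_info.py -m 0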
00:05:34.383 [2024-07-12 13:13:31.774733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.641 [2024-07-12 13:13:31.863824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.641 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.641 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:34.641 13:13:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:34.641 13:13:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:34.641 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.641 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.641 { 00:05:34.641 "filename": "/tmp/spdk_mem_dump.txt" 00:05:34.641 } 00:05:34.641 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.641 13:13:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:34.900 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:34.900 1 heaps totaling size 814.000000 MiB 00:05:34.900 size: 814.000000 MiB heap id: 0 00:05:34.900 end heaps---------- 00:05:34.900 8 mempools totaling size 598.116089 MiB 00:05:34.900 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:34.900 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:34.900 size: 84.521057 MiB name: bdev_io_3446539 00:05:34.900 size: 51.011292 MiB name: evtpool_3446539 00:05:34.900 size: 50.003479 MiB name: msgpool_3446539 00:05:34.900 size: 21.763794 MiB name: PDU_Pool 00:05:34.900 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:34.900 size: 0.026123 MiB name: Session_Pool 00:05:34.900 end mempools------- 00:05:34.900 6 memzones totaling size 4.142822 MiB 00:05:34.900 size: 1.000366 MiB name: RG_ring_0_3446539 00:05:34.900 size: 1.000366 MiB name: RG_ring_1_3446539 00:05:34.900 size: 1.000366 MiB name: RG_ring_4_3446539 00:05:34.900 size: 1.000366 MiB name: RG_ring_5_3446539 00:05:34.900 size: 0.125366 MiB name: RG_ring_2_3446539 00:05:34.900 size: 0.015991 MiB name: RG_ring_3_3446539 00:05:34.900 end memzones------- 00:05:34.900 13:13:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:34.900 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:34.900 list of free elements. 
size: 12.519348 MiB 00:05:34.900 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:34.900 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:34.900 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:34.900 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:34.900 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:34.900 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:34.900 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:34.900 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:34.900 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:34.900 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:34.900 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:34.900 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:34.900 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:34.900 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:34.900 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:34.900 list of standard malloc elements. size: 199.218079 MiB 00:05:34.900 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:34.900 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:34.900 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:34.900 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:34.900 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:34.900 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:34.900 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:34.900 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:34.900 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:34.900 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:34.900 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:34.900 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:34.900 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:34.900 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:34.900 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:34.900 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:34.900 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:34.900 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:34.900 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:34.900 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:34.900 list of memzone associated elements. size: 602.262573 MiB 00:05:34.900 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:34.900 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:34.900 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:34.900 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:34.900 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:34.900 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3446539_0 00:05:34.900 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:34.900 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3446539_0 00:05:34.900 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:34.900 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3446539_0 00:05:34.900 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:34.900 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:34.900 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:34.900 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:34.900 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:34.900 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3446539 00:05:34.900 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:34.900 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3446539 00:05:34.900 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:34.900 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3446539 00:05:34.900 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:34.900 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:34.900 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:34.900 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:34.900 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:34.900 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:34.900 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:34.900 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:34.900 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:34.900 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3446539 00:05:34.900 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:34.900 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3446539 00:05:34.900 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:34.900 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3446539 00:05:34.900 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:34.900 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3446539 00:05:34.900 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:34.900 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3446539 00:05:34.900 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:34.900 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:34.900 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:34.900 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:34.900 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:34.900 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:34.900 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:34.900 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3446539 00:05:34.900 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:34.900 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:34.900 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:34.900 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:34.901 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:34.901 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3446539 00:05:34.901 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:34.901 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:34.901 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:34.901 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3446539 00:05:34.901 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:34.901 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3446539 00:05:34.901 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:34.901 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:34.901 13:13:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:34.901 13:13:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3446539 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3446539 ']' 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3446539 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3446539 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3446539' 00:05:34.901 killing process with pid 3446539 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3446539 00:05:34.901 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3446539 00:05:35.474 00:05:35.474 real 0m1.033s 00:05:35.474 user 0m1.020s 00:05:35.474 sys 0m0.381s 00:05:35.474 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.474 13:13:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.474 ************************************ 00:05:35.474 END TEST dpdk_mem_utility 00:05:35.474 ************************************ 00:05:35.474 13:13:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.474 13:13:32 -- spdk/autotest.sh@181 -- # 
run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.474 13:13:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.474 13:13:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.474 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.474 ************************************ 00:05:35.474 START TEST event 00:05:35.474 ************************************ 00:05:35.474 13:13:32 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.474 * Looking for test storage... 00:05:35.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:35.474 13:13:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:35.474 13:13:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.474 13:13:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.474 13:13:32 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:35.474 13:13:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.474 13:13:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.474 ************************************ 00:05:35.474 START TEST event_perf 00:05:35.474 ************************************ 00:05:35.474 13:13:32 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.474 Running I/O for 1 seconds...[2024-07-12 13:13:32.787482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:35.474 [2024-07-12 13:13:32.787544] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446729 ] 00:05:35.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.474 [2024-07-12 13:13:32.819284] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:35.474 [2024-07-12 13:13:32.845610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.474 [2024-07-12 13:13:32.932294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.474 [2024-07-12 13:13:32.932364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.475 [2024-07-12 13:13:32.932418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.475 [2024-07-12 13:13:32.932421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.845 Running I/O for 1 seconds... 00:05:36.845 lcore 0: 237848 00:05:36.845 lcore 1: 237848 00:05:36.845 lcore 2: 237848 00:05:36.845 lcore 3: 237848 00:05:36.845 done. 
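Each of the four reactors reports 237,848 processed events for the one-second window above, roughly 951 k events in total (4 x 237,848 = 951,392), which is the figure this micro-benchmark tracks. The binary can be re-run by hand with the same arguments the test used; a sketch, assuming the in-tree build under $SPDK_DIR:

    # 4 reactors (core mask 0xF), run for 1 second, print events handled per lcore
    $SPDK_DIR/test/event/event_perf/event_perf -m 0xF -t 1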
00:05:36.845 00:05:36.845 real 0m1.231s 00:05:36.845 user 0m4.149s 00:05:36.845 sys 0m0.078s 00:05:36.845 13:13:34 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.845 13:13:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.845 ************************************ 00:05:36.845 END TEST event_perf 00:05:36.845 ************************************ 00:05:36.845 13:13:34 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.845 13:13:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:36.845 13:13:34 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:36.845 13:13:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.845 13:13:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.845 ************************************ 00:05:36.845 START TEST event_reactor 00:05:36.845 ************************************ 00:05:36.845 13:13:34 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:36.845 [2024-07-12 13:13:34.064461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:36.845 [2024-07-12 13:13:34.064518] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446889 ] 00:05:36.846 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.846 [2024-07-12 13:13:34.098334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:36.846 [2024-07-12 13:13:34.124849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.846 [2024-07-12 13:13:34.209198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.214 test_start 00:05:38.214 oneshot 00:05:38.214 tick 100 00:05:38.214 tick 100 00:05:38.214 tick 250 00:05:38.214 tick 100 00:05:38.214 tick 100 00:05:38.214 tick 100 00:05:38.214 tick 250 00:05:38.214 tick 500 00:05:38.214 tick 100 00:05:38.214 tick 100 00:05:38.214 tick 250 00:05:38.214 tick 100 00:05:38.214 tick 100 00:05:38.214 test_end 00:05:38.214 00:05:38.214 real 0m1.232s 00:05:38.214 user 0m1.151s 00:05:38.214 sys 0m0.077s 00:05:38.214 13:13:35 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.214 13:13:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:38.215 ************************************ 00:05:38.215 END TEST event_reactor 00:05:38.215 ************************************ 00:05:38.215 13:13:35 event -- common/autotest_common.sh@1142 -- # return 0 00:05:38.215 13:13:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.215 13:13:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:38.215 13:13:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.215 13:13:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.215 ************************************ 00:05:38.215 START TEST event_reactor_perf 00:05:38.215 ************************************ 00:05:38.215 13:13:35 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.215 [2024-07-12 13:13:35.345111] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:38.215 [2024-07-12 13:13:35.345180] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447164 ] 00:05:38.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.215 [2024-07-12 13:13:35.378387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:38.215 [2024-07-12 13:13:35.403261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.215 [2024-07-12 13:13:35.485675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.146 test_start 00:05:39.146 test_end 00:05:39.146 Performance: 450200 events per second 00:05:39.146 00:05:39.146 real 0m1.230s 00:05:39.146 user 0m1.151s 00:05:39.146 sys 0m0.075s 00:05:39.146 13:13:36 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.146 13:13:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.146 ************************************ 00:05:39.146 END TEST event_reactor_perf 00:05:39.146 ************************************ 00:05:39.146 13:13:36 event -- common/autotest_common.sh@1142 -- # return 0 00:05:39.146 13:13:36 event -- event/event.sh@49 -- # uname -s 00:05:39.147 13:13:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:39.147 13:13:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.147 13:13:36 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.147 13:13:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.147 13:13:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.147 ************************************ 00:05:39.147 START TEST event_scheduler 00:05:39.147 ************************************ 00:05:39.147 13:13:36 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.404 * Looking for test storage... 00:05:39.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:39.404 13:13:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:39.404 13:13:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3447344 00:05:39.404 13:13:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:39.404 13:13:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.404 13:13:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3447344 00:05:39.404 13:13:36 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3447344 ']' 00:05:39.404 13:13:36 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.404 13:13:36 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.404 13:13:36 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.404 13:13:36 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.404 13:13:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.404 [2024-07-12 13:13:36.714776] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:05:39.404 [2024-07-12 13:13:36.714845] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447344 ] 00:05:39.404 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.404 [2024-07-12 13:13:36.748386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:39.404 [2024-07-12 13:13:36.774738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.404 [2024-07-12 13:13:36.864351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.404 [2024-07-12 13:13:36.864375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.404 [2024-07-12 13:13:36.864436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.404 [2024-07-12 13:13:36.864440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.661 13:13:36 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.661 13:13:36 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:39.661 13:13:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:39.661 13:13:36 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.661 13:13:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.661 [2024-07-12 13:13:36.933363] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:39.662 [2024-07-12 13:13:36.933389] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:39.662 [2024-07-12 13:13:36.933406] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:39.662 [2024-07-12 13:13:36.933417] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:39.662 [2024-07-12 13:13:36.933427] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:39.662 13:13:36 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:39.662 13:13:36 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 [2024-07-12 13:13:37.028190] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
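The scheduler test app was started with --wait-for-rpc, so the framework is only brought up after the scheduler has been switched: framework_set_scheduler dynamic (the NOTICE lines above show its defaults of load limit 20, core limit 80 and core busy 95, and that the dpdk governor could not be used on this core mask), followed by framework_start_init. The same switch works against any SPDK app started with --wait-for-rpc; a sketch using the stock rpc.py and the default RPC socket:

    $SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic
    $SPDK_DIR/scripts/rpc.py framework_start_init
    # once the framework is up, confirm the active scheduler and per-reactor state
    $SPDK_DIR/scripts/rpc.py framework_get_scheduler
    $SPDK_DIR/scripts/rpc.py framework_get_reactors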
00:05:39.662 13:13:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:39.662 13:13:37 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.662 13:13:37 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 ************************************ 00:05:39.662 START TEST scheduler_create_thread 00:05:39.662 ************************************ 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 2 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 3 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 4 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 5 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 6 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 7 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 8 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 9 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.662 10 00:05:39.662 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.920 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.176 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.176 00:05:40.176 real 0m0.590s 00:05:40.176 user 0m0.014s 00:05:40.176 sys 0m0.000s 00:05:40.176 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.176 13:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.176 ************************************ 00:05:40.176 END TEST scheduler_create_thread 00:05:40.176 ************************************ 00:05:40.432 13:13:37 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:40.432 13:13:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:40.432 13:13:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3447344 00:05:40.432 13:13:37 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3447344 ']' 00:05:40.432 13:13:37 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3447344 00:05:40.432 13:13:37 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:40.433 13:13:37 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.433 13:13:37 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3447344 00:05:40.433 13:13:37 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:40.433 13:13:37 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:40.433 13:13:37 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3447344' 00:05:40.433 killing process with pid 3447344 00:05:40.433 13:13:37 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3447344 00:05:40.433 13:13:37 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3447344 00:05:40.689 [2024-07-12 13:13:38.128745] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
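The scheduler_create_thread subtest that just finished exercises the test-only scheduler plugin: it creates pinned threads that report a fixed active percentage (100 for active_pinned, 0 for idle_pinned, 30 for one_third_active), raises one at runtime, and deletes another, giving the dynamic scheduler load to rebalance. The calls condensed from the xtrace above (scheduler_plugin ships with the scheduler test app, so it has to be importable by rpc.py; thread ids 11 and 12 are simply the ones this particular run returned):

    # pinned thread on core 0 reporting 100% active time
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # set thread 11's reported active time to 50% (half_active was created at 0)
    rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    # create a throwaway thread and delete it again (id 12 in this run)
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    rpc.py --plugin scheduler_plugin scheduler_thread_delete 12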
00:05:40.947 00:05:40.947 real 0m1.735s 00:05:40.947 user 0m2.270s 00:05:40.947 sys 0m0.323s 00:05:40.947 13:13:38 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.947 13:13:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.947 ************************************ 00:05:40.947 END TEST event_scheduler 00:05:40.947 ************************************ 00:05:40.947 13:13:38 event -- common/autotest_common.sh@1142 -- # return 0 00:05:40.947 13:13:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:40.947 13:13:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:40.947 13:13:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.947 13:13:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.947 13:13:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.947 ************************************ 00:05:40.947 START TEST app_repeat 00:05:40.947 ************************************ 00:05:40.947 13:13:38 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3447539 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3447539' 00:05:40.947 Process app_repeat pid: 3447539 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:40.947 spdk_app_start Round 0 00:05:40.947 13:13:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3447539 /var/tmp/spdk-nbd.sock 00:05:40.947 13:13:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3447539 ']' 00:05:40.947 13:13:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.947 13:13:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.947 13:13:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.947 13:13:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.947 13:13:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.205 [2024-07-12 13:13:38.423164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:05:41.205 [2024-07-12 13:13:38.423227] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447539 ] 00:05:41.205 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.205 [2024-07-12 13:13:38.456246] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:41.205 [2024-07-12 13:13:38.483211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.205 [2024-07-12 13:13:38.571011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.205 [2024-07-12 13:13:38.571014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.205 13:13:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.205 13:13:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:41.205 13:13:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.462 Malloc0 00:05:41.462 13:13:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.720 Malloc1 00:05:41.720 13:13:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.720 13:13:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.977 /dev/nbd0 00:05:41.977 13:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.977 13:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:41.977 13:13:39 
event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.977 1+0 records in 00:05:41.977 1+0 records out 00:05:41.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209633 s, 19.5 MB/s 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:41.977 13:13:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.235 13:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.235 13:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.235 13:13:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.235 /dev/nbd1 00:05:42.235 13:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.235 13:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.235 13:13:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.527 1+0 records in 00:05:42.527 1+0 records out 00:05:42.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181965 s, 22.5 MB/s 00:05:42.527 13:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.527 13:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.527 13:13:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.527 13:13:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.527 
13:13:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.527 13:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.527 13:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.527 13:13:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.527 13:13:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.527 13:13:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.527 13:13:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.527 { 00:05:42.527 "nbd_device": "/dev/nbd0", 00:05:42.527 "bdev_name": "Malloc0" 00:05:42.527 }, 00:05:42.527 { 00:05:42.527 "nbd_device": "/dev/nbd1", 00:05:42.527 "bdev_name": "Malloc1" 00:05:42.527 } 00:05:42.527 ]' 00:05:42.527 13:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.527 { 00:05:42.528 "nbd_device": "/dev/nbd0", 00:05:42.528 "bdev_name": "Malloc0" 00:05:42.528 }, 00:05:42.528 { 00:05:42.528 "nbd_device": "/dev/nbd1", 00:05:42.528 "bdev_name": "Malloc1" 00:05:42.528 } 00:05:42.528 ]' 00:05:42.528 13:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.786 /dev/nbd1' 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.786 /dev/nbd1' 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.786 256+0 records in 00:05:42.786 256+0 records out 00:05:42.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513452 s, 204 MB/s 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.786 256+0 records in 00:05:42.786 256+0 records out 00:05:42.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211265 s, 49.6 MB/s 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.786 256+0 records in 00:05:42.786 256+0 records out 00:05:42.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223491 s, 46.9 MB/s 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.786 13:13:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.787 13:13:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.044 13:13:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.301 13:13:40 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.301 13:13:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.557 13:13:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.557 13:13:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.815 13:13:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.073 [2024-07-12 13:13:41.391916] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.073 [2024-07-12 13:13:41.472093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.073 [2024-07-12 13:13:41.472096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.073 [2024-07-12 13:13:41.530530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.073 [2024-07-12 13:13:41.530592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
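The notices above close out one app_repeat round. Stripped of the xtrace noise, the data path each round verifies is short: two malloc bdevs are created over /var/tmp/spdk-nbd.sock, exposed as /dev/nbd0 and /dev/nbd1, written with 256 x 4096-byte blocks of random data (1,048,576 bytes, which is why the verify step uses cmp -n 1M), and compared back before the disks are stopped. A condensed sketch of that flow, using the same socket, sizes and paths as the trace (the shell variables are shorthand introduced here, not part of the test):

# condensed nbd write/verify flow, as traced above
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
TMP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

$RPC -s $SOCK bdev_malloc_create 64 4096                # Malloc0
$RPC -s $SOCK bdev_malloc_create 64 4096                # Malloc1
$RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
$RPC -s $SOCK nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=$TMP bs=4096 count=256            # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if=$TMP of=$nbd bs=4096 count=256 oflag=direct     # write through the NBD device
  cmp -b -n 1M $TMP $nbd                                # read back and compare
done
rm $TMP

$RPC -s $SOCK nbd_stop_disk /dev/nbd0
$RPC -s $SOCK nbd_stop_disk /dev/nbd1
$RPC -s $SOCK nbd_get_disks                             # reports '[]' once both disks are gone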
00:05:47.352 13:13:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.352 13:13:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.352 spdk_app_start Round 1 00:05:47.352 13:13:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3447539 /var/tmp/spdk-nbd.sock 00:05:47.352 13:13:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3447539 ']' 00:05:47.352 13:13:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.352 13:13:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.352 13:13:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.352 13:13:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.352 13:13:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.352 13:13:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.352 13:13:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:47.352 13:13:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.352 Malloc0 00:05:47.352 13:13:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.609 Malloc1 00:05:47.609 13:13:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.609 13:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.867 /dev/nbd0 00:05:47.867 13:13:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.867 13:13:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.867 1+0 records in 00:05:47.867 1+0 records out 00:05:47.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145054 s, 28.2 MB/s 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.867 13:13:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:47.867 13:13:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.867 13:13:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.867 13:13:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.125 /dev/nbd1 00:05:48.125 13:13:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.125 13:13:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.125 1+0 records in 00:05:48.125 1+0 records out 00:05:48.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181398 s, 22.6 MB/s 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.125 13:13:45 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.125 13:13:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.125 13:13:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.125 13:13:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.125 13:13:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.125 13:13:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.125 13:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.383 { 00:05:48.383 "nbd_device": "/dev/nbd0", 00:05:48.383 "bdev_name": "Malloc0" 00:05:48.383 }, 00:05:48.383 { 00:05:48.383 "nbd_device": "/dev/nbd1", 00:05:48.383 "bdev_name": "Malloc1" 00:05:48.383 } 00:05:48.383 ]' 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.383 { 00:05:48.383 "nbd_device": "/dev/nbd0", 00:05:48.383 "bdev_name": "Malloc0" 00:05:48.383 }, 00:05:48.383 { 00:05:48.383 "nbd_device": "/dev/nbd1", 00:05:48.383 "bdev_name": "Malloc1" 00:05:48.383 } 00:05:48.383 ]' 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.383 /dev/nbd1' 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.383 /dev/nbd1' 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.383 256+0 records in 00:05:48.383 256+0 records out 00:05:48.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507456 s, 207 MB/s 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.383 256+0 records in 00:05:48.383 256+0 records out 00:05:48.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0203588 s, 51.5 MB/s 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.383 13:13:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.640 256+0 records in 00:05:48.640 256+0 records out 00:05:48.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230091 s, 45.6 MB/s 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.640 13:13:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.641 13:13:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.898 13:13:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.155 13:13:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.413 13:13:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.413 13:13:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.670 13:13:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.928 [2024-07-12 13:13:47.180211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.928 [2024-07-12 13:13:47.262933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.928 [2024-07-12 13:13:47.262937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.928 [2024-07-12 13:13:47.315601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.928 [2024-07-12 13:13:47.315674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
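Round 1 above repeats the same write/verify pass. The only helpers whose bodies are not spelled out in a single trace line are the two pollers that gate it: waitfornbd, which waits for the new device to show up in /proc/partitions and then reads one 4 KiB block back with O_DIRECT to confirm it is usable, and waitfornbd_exit, which waits for the entry to disappear after nbd_stop_disk. A rough reconstruction from the trace follows; the retry cap of 20 and the single-block read are taken from the xtrace output, while the sleep between polls and the shortened temp-file path are assumptions, so treat this as a sketch rather than the test source:

waitfornbd() {                      # poll until /dev/$1 is registered and readable
  local nbd_name=$1 i
  for (( i = 1; i <= 20; i++ )); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1                       # assumption: pause between polls
  done
  dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  local size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ]                  # a non-empty read means the device is live
}

waitfornbd_exit() {                 # poll until /dev/$1 is gone after nbd_stop_disk
  local nbd_name=$1 i
  for (( i = 1; i <= 20; i++ )); do
    grep -q -w "$nbd_name" /proc/partitions || break
    sleep 0.1                       # assumption, as above
  done
}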
00:05:53.206 13:13:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.206 13:13:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.206 spdk_app_start Round 2 00:05:53.206 13:13:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3447539 /var/tmp/spdk-nbd.sock 00:05:53.206 13:13:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3447539 ']' 00:05:53.206 13:13:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.206 13:13:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.206 13:13:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.206 13:13:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.206 13:13:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.206 13:13:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.206 13:13:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:53.207 13:13:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.207 Malloc0 00:05:53.207 13:13:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.465 Malloc1 00:05:53.465 13:13:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.465 13:13:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.723 /dev/nbd0 00:05:53.723 13:13:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.723 13:13:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:53.723 13:13:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.723 1+0 records in 00:05:53.723 1+0 records out 00:05:53.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015938 s, 25.7 MB/s 00:05:53.723 13:13:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.723 13:13:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:53.723 13:13:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.723 13:13:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:53.723 13:13:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:53.723 13:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.723 13:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.723 13:13:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.981 /dev/nbd1 00:05:53.981 13:13:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.981 13:13:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.981 1+0 records in 00:05:53.981 1+0 records out 00:05:53.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188657 s, 21.7 MB/s 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:53.981 13:13:51 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:53.981 13:13:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:53.981 13:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.981 13:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.981 13:13:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.981 13:13:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.981 13:13:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.240 { 00:05:54.240 "nbd_device": "/dev/nbd0", 00:05:54.240 "bdev_name": "Malloc0" 00:05:54.240 }, 00:05:54.240 { 00:05:54.240 "nbd_device": "/dev/nbd1", 00:05:54.240 "bdev_name": "Malloc1" 00:05:54.240 } 00:05:54.240 ]' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.240 { 00:05:54.240 "nbd_device": "/dev/nbd0", 00:05:54.240 "bdev_name": "Malloc0" 00:05:54.240 }, 00:05:54.240 { 00:05:54.240 "nbd_device": "/dev/nbd1", 00:05:54.240 "bdev_name": "Malloc1" 00:05:54.240 } 00:05:54.240 ]' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.240 /dev/nbd1' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.240 /dev/nbd1' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.240 256+0 records in 00:05:54.240 256+0 records out 00:05:54.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052514 s, 200 MB/s 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.240 256+0 records in 00:05:54.240 256+0 records out 00:05:54.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0204248 s, 51.3 MB/s 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.240 256+0 records in 00:05:54.240 256+0 records out 00:05:54.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225204 s, 46.6 MB/s 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.240 13:13:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.498 13:13:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.756 13:13:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.014 13:13:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.014 13:13:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.578 13:13:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.578 [2024-07-12 13:13:52.962297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.578 [2024-07-12 13:13:53.041611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.578 [2024-07-12 13:13:53.041614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.836 [2024-07-12 13:13:53.101898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.836 [2024-07-12 13:13:53.101954] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
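That was the third and last in-loop shutdown. Reassembled from the event/event.sh locations in the trace, the whole app_repeat loop reduces to the outline below; paths are abbreviated and $app_pid stands for the pid waited on above (3447539), so read it as a summary rather than the verbatim test script:

# outline of the app_repeat round loop
for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten $app_pid /var/tmp/spdk-nbd.sock                 # app comes back up between rounds
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
  nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # triggers the next restart
  sleep 3
done
waitforlisten $app_pid /var/tmp/spdk-nbd.sock                   # the final restart (Round 3)
killprocess $app_pid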
00:05:58.360 13:13:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3447539 /var/tmp/spdk-nbd.sock 00:05:58.360 13:13:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3447539 ']' 00:05:58.360 13:13:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.360 13:13:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.360 13:13:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.360 13:13:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.360 13:13:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.617 13:13:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.617 13:13:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:58.617 13:13:55 event.app_repeat -- event/event.sh@39 -- # killprocess 3447539 00:05:58.617 13:13:55 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3447539 ']' 00:05:58.617 13:13:55 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3447539 00:05:58.617 13:13:55 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:58.617 13:13:56 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.617 13:13:56 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3447539 00:05:58.617 13:13:56 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.617 13:13:56 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.617 13:13:56 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3447539' 00:05:58.617 killing process with pid 3447539 00:05:58.617 13:13:56 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3447539 00:05:58.617 13:13:56 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3447539 00:05:58.883 spdk_app_start is called in Round 0. 00:05:58.883 Shutdown signal received, stop current app iteration 00:05:58.883 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 reinitialization... 00:05:58.883 spdk_app_start is called in Round 1. 00:05:58.883 Shutdown signal received, stop current app iteration 00:05:58.883 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 reinitialization... 00:05:58.883 spdk_app_start is called in Round 2. 00:05:58.883 Shutdown signal received, stop current app iteration 00:05:58.883 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 reinitialization... 00:05:58.883 spdk_app_start is called in Round 3. 
00:05:58.883 Shutdown signal received, stop current app iteration 00:05:58.883 13:13:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:58.883 13:13:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:58.883 00:05:58.883 real 0m17.824s 00:05:58.883 user 0m38.918s 00:05:58.883 sys 0m3.214s 00:05:58.883 13:13:56 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.883 13:13:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.883 ************************************ 00:05:58.883 END TEST app_repeat 00:05:58.883 ************************************ 00:05:58.883 13:13:56 event -- common/autotest_common.sh@1142 -- # return 0 00:05:58.883 13:13:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:58.883 13:13:56 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:58.883 13:13:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.883 13:13:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.883 13:13:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.883 ************************************ 00:05:58.883 START TEST cpu_locks 00:05:58.883 ************************************ 00:05:58.883 13:13:56 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:58.883 * Looking for test storage... 00:05:58.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:58.883 13:13:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:58.883 13:13:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:58.883 13:13:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:58.883 13:13:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:58.883 13:13:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.883 13:13:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.883 13:13:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.177 ************************************ 00:05:59.177 START TEST default_locks 00:05:59.177 ************************************ 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3449886 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3449886 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3449886 ']' 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.177 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.177 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.177 [2024-07-12 13:13:56.408294] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:05:59.177 [2024-07-12 13:13:56.408412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449886 ] 00:05:59.177 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.177 [2024-07-12 13:13:56.439055] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.177 [2024-07-12 13:13:56.466576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.177 [2024-07-12 13:13:56.549760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.434 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.434 13:13:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:59.434 13:13:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3449886 00:05:59.434 13:13:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3449886 00:05:59.434 13:13:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.998 lslocks: write error 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3449886 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3449886 ']' 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3449886 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3449886 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3449886' 00:05:59.998 killing process with pid 3449886 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3449886 00:05:59.998 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3449886 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3449886 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3449886 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t 
waitforlisten 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3449886 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3449886 ']' 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3449886) - No such process 00:06:00.255 ERROR: process (pid: 3449886) is no longer running 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.255 00:06:00.255 real 0m1.229s 00:06:00.255 user 0m1.165s 00:06:00.255 sys 0m0.543s 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.255 13:13:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 ************************************ 00:06:00.255 END TEST default_locks 00:06:00.255 ************************************ 00:06:00.255 13:13:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:00.255 13:13:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:00.255 13:13:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.255 13:13:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.255 13:13:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 ************************************ 00:06:00.255 START TEST default_locks_via_rpc 00:06:00.255 ************************************ 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3450058 
00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3450058 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3450058 ']' 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.255 13:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 [2024-07-12 13:13:57.690232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:00.255 [2024-07-12 13:13:57.690312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450058 ] 00:06:00.255 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.255 [2024-07-12 13:13:57.721822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:00.513 [2024-07-12 13:13:57.747829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.513 [2024-07-12 13:13:57.835139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3450058 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3450058 00:06:00.771 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3450058 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3450058 ']' 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3450058 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3450058 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3450058' 00:06:01.029 killing process with pid 3450058 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3450058 00:06:01.029 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3450058 00:06:01.594 00:06:01.594 real 0m1.145s 00:06:01.594 user 0m1.074s 00:06:01.594 sys 0m0.501s 00:06:01.594 13:13:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.594 13:13:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.594 ************************************ 00:06:01.594 END TEST default_locks_via_rpc 00:06:01.594 ************************************ 00:06:01.594 13:13:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.594 13:13:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:01.594 13:13:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.594 13:13:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.594 13:13:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.594 ************************************ 00:06:01.594 START TEST non_locking_app_on_locked_coremask 00:06:01.594 ************************************ 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3450218 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3450218 /var/tmp/spdk.sock 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3450218 ']' 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.594 13:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.594 [2024-07-12 13:13:58.888073] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:01.594 [2024-07-12 13:13:58.888170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450218 ] 00:06:01.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.594 [2024-07-12 13:13:58.920480] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:01.594 [2024-07-12 13:13:58.948498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.594 [2024-07-12 13:13:59.034321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3450341 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3450341 /var/tmp/spdk2.sock 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3450341 ']' 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.850 13:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.850 [2024-07-12 13:13:59.314567] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:01.850 [2024-07-12 13:13:59.314662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450341 ] 00:06:02.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.107 [2024-07-12 13:13:59.347024] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.107 [2024-07-12 13:13:59.396968] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
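For readers skimming this log: the sequence just above is the pattern the non_locking_app_on_locked_coremask test exercises — the first spdk_tgt claims core 0 through a per-core lock file, and a second instance started with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice) is allowed onto the same mask. A minimal manual sketch of the same idea, assuming a default SPDK build tree and with <first_pid> as a placeholder:

  ./build/bin/spdk_tgt -m 0x1 &
  # second target shares core 0 but skips the core-claim step
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # the per-core lock file is still held by the first instance
  lslocks -p <first_pid> | grep spdk_cpu_lock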
00:06:02.107 [2024-07-12 13:13:59.397002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.107 [2024-07-12 13:13:59.563998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.040 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.040 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:03.040 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3450218 00:06:03.040 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3450218 00:06:03.040 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.297 lslocks: write error 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3450218 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3450218 ']' 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3450218 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3450218 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3450218' 00:06:03.297 killing process with pid 3450218 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3450218 00:06:03.297 13:14:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3450218 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3450341 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3450341 ']' 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3450341 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3450341 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3450341' 00:06:04.230 
killing process with pid 3450341 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3450341 00:06:04.230 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3450341 00:06:04.490 00:06:04.490 real 0m3.090s 00:06:04.490 user 0m3.262s 00:06:04.490 sys 0m1.033s 00:06:04.490 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.490 13:14:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.490 ************************************ 00:06:04.490 END TEST non_locking_app_on_locked_coremask 00:06:04.490 ************************************ 00:06:04.490 13:14:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:04.490 13:14:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:04.490 13:14:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.490 13:14:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.490 13:14:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.749 ************************************ 00:06:04.749 START TEST locking_app_on_unlocked_coremask 00:06:04.749 ************************************ 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3450652 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3450652 /var/tmp/spdk.sock 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3450652 ']' 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.749 13:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.749 [2024-07-12 13:14:02.029469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:06:04.749 [2024-07-12 13:14:02.029571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450652 ] 00:06:04.750 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.750 [2024-07-12 13:14:02.060538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.750 [2024-07-12 13:14:02.086398] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.750 [2024-07-12 13:14:02.086423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.750 [2024-07-12 13:14:02.164627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3450772 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3450772 /var/tmp/spdk2.sock 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3450772 ']' 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.008 13:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.008 [2024-07-12 13:14:02.448897] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:05.008 [2024-07-12 13:14:02.448986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450772 ] 00:06:05.008 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.266 [2024-07-12 13:14:02.482127] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:05.266 [2024-07-12 13:14:02.532553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.266 [2024-07-12 13:14:02.698467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.196 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.196 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:06.196 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3450772 00:06:06.196 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3450772 00:06:06.196 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.454 lslocks: write error 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3450652 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3450652 ']' 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3450652 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3450652 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3450652' 00:06:06.454 killing process with pid 3450652 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3450652 00:06:06.454 13:14:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3450652 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3450772 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3450772 ']' 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3450772 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3450772 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3450772' 00:06:07.387 killing process with pid 3450772 00:06:07.387 
13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3450772 00:06:07.387 13:14:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3450772 00:06:07.646 00:06:07.646 real 0m3.044s 00:06:07.646 user 0m3.248s 00:06:07.646 sys 0m0.969s 00:06:07.646 13:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.646 13:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.646 ************************************ 00:06:07.646 END TEST locking_app_on_unlocked_coremask 00:06:07.646 ************************************ 00:06:07.646 13:14:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:07.646 13:14:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:07.646 13:14:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.646 13:14:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.646 13:14:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.646 ************************************ 00:06:07.646 START TEST locking_app_on_locked_coremask 00:06:07.646 ************************************ 00:06:07.646 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:07.646 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3451081 00:06:07.646 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.646 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3451081 /var/tmp/spdk.sock 00:06:07.647 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3451081 ']' 00:06:07.647 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.647 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.647 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.647 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.647 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.905 [2024-07-12 13:14:05.124938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:07.905 [2024-07-12 13:14:05.125043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451081 ] 00:06:07.905 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.905 [2024-07-12 13:14:05.155925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:07.905 [2024-07-12 13:14:05.182556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.905 [2024-07-12 13:14:05.265949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.162 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.162 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:08.162 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3451097 00:06:08.162 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.162 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3451097 /var/tmp/spdk2.sock 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3451097 /var/tmp/spdk2.sock 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3451097 /var/tmp/spdk2.sock 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3451097 ']' 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.163 13:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.163 [2024-07-12 13:14:05.554838] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:08.163 [2024-07-12 13:14:05.554933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451097 ] 00:06:08.163 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.163 [2024-07-12 13:14:05.589014] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:08.419 [2024-07-12 13:14:05.638942] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3451081 has claimed it. 00:06:08.419 [2024-07-12 13:14:05.638991] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3451097) - No such process 00:06:08.983 ERROR: process (pid: 3451097) is no longer running 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3451081 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3451081 00:06:08.983 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.240 lslocks: write error 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3451081 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3451081 ']' 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3451081 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3451081 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3451081' 00:06:09.240 killing process with pid 3451081 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3451081 00:06:09.240 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3451081 00:06:09.497 00:06:09.497 real 0m1.857s 00:06:09.497 user 0m2.041s 00:06:09.497 sys 0m0.576s 00:06:09.497 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.497 13:14:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.497 ************************************ 00:06:09.497 END TEST locking_app_on_locked_coremask 00:06:09.497 ************************************ 00:06:09.497 13:14:06 
event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:09.497 13:14:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:09.497 13:14:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.497 13:14:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.497 13:14:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.754 ************************************ 00:06:09.754 START TEST locking_overlapped_coremask 00:06:09.754 ************************************ 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3451374 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3451374 /var/tmp/spdk.sock 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3451374 ']' 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.754 13:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.754 [2024-07-12 13:14:07.035406] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:09.754 [2024-07-12 13:14:07.035501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451374 ] 00:06:09.754 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.754 [2024-07-12 13:14:07.068250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:09.754 [2024-07-12 13:14:07.094108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.754 [2024-07-12 13:14:07.181813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.754 [2024-07-12 13:14:07.181877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.754 [2024-07-12 13:14:07.181880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3451388 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3451388 /var/tmp/spdk2.sock 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3451388 /var/tmp/spdk2.sock 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3451388 /var/tmp/spdk2.sock 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3451388 ']' 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.012 13:14:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.012 [2024-07-12 13:14:07.479191] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
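A quick aside on the two core masks in play in this test: the first target's -m 0x7 selects cores 0-2 (the three reactors above), while the second target's -m 0x1c selects cores 2-4, so the only contested core is core 2 — exactly the claim_cpu_cores failure reported just below. A small, purely illustrative shell check of that reading:

  for m in 0x7 0x1c; do
    printf '%s:' "$m"
    for c in {0..7}; do (( m & (1 << c) )) && printf ' %s' "$c"; done
    echo
  done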
00:06:10.012 [2024-07-12 13:14:07.479289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451388 ] 00:06:10.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.269 [2024-07-12 13:14:07.517337] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.269 [2024-07-12 13:14:07.571822] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3451374 has claimed it. 00:06:10.269 [2024-07-12 13:14:07.571871] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3451388) - No such process 00:06:10.834 ERROR: process (pid: 3451388) is no longer running 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3451374 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3451374 ']' 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3451374 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3451374 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3451374' 00:06:10.834 killing process with pid 3451374 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3451374 00:06:10.834 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3451374 00:06:11.399 00:06:11.399 real 0m1.611s 00:06:11.399 user 0m4.383s 00:06:11.399 sys 0m0.446s 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.399 ************************************ 00:06:11.399 END TEST locking_overlapped_coremask 00:06:11.399 ************************************ 00:06:11.399 13:14:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:11.399 13:14:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.399 13:14:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.399 13:14:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.399 13:14:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.399 ************************************ 00:06:11.399 START TEST locking_overlapped_coremask_via_rpc 00:06:11.399 ************************************ 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3451558 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3451558 /var/tmp/spdk.sock 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3451558 ']' 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.399 13:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.399 [2024-07-12 13:14:08.693038] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:06:11.400 [2024-07-12 13:14:08.693098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451558 ] 00:06:11.400 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.400 [2024-07-12 13:14:08.723468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.400 [2024-07-12 13:14:08.748158] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.400 [2024-07-12 13:14:08.748182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.400 [2024-07-12 13:14:08.828237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.400 [2024-07-12 13:14:08.828300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.400 [2024-07-12 13:14:08.828303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3451613 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3451613 /var/tmp/spdk2.sock 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3451613 ']' 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.658 13:14:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.916 [2024-07-12 13:14:09.132948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:11.916 [2024-07-12 13:14:09.133043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451613 ] 00:06:11.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.916 [2024-07-12 13:14:09.169343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:11.916 [2024-07-12 13:14:09.223782] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.916 [2024-07-12 13:14:09.223809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.173 [2024-07-12 13:14:09.399737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.173 [2024-07-12 13:14:09.399802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.173 [2024-07-12 13:14:09.399804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.769 [2024-07-12 13:14:10.092425] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3451558 has claimed it. 
00:06:12.769 request: 00:06:12.769 { 00:06:12.769 "method": "framework_enable_cpumask_locks", 00:06:12.769 "req_id": 1 00:06:12.769 } 00:06:12.769 Got JSON-RPC error response 00:06:12.769 response: 00:06:12.769 { 00:06:12.769 "code": -32603, 00:06:12.769 "message": "Failed to claim CPU core: 2" 00:06:12.769 } 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3451558 /var/tmp/spdk.sock 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3451558 ']' 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.769 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3451613 /var/tmp/spdk2.sock 00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3451613 ']' 00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
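For context, the framework_enable_cpumask_locks exchange above is an ordinary JSON-RPC call over the target's Unix socket; the rpc_cmd helper used by these tests drives the stock scripts/rpc.py client, so (assuming a default checkout) the same request can be issued by hand and should return the same -32603 error while the other target still holds core 2:

  # sketch only: ask the second target to re-enable CPU core locks
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks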
00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.027 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.285 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.285 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:13.285 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.285 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.285 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.285 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.285 00:06:13.285 real 0m1.972s 00:06:13.285 user 0m0.998s 00:06:13.285 sys 0m0.206s 00:06:13.285 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.285 13:14:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.285 ************************************ 00:06:13.285 END TEST locking_overlapped_coremask_via_rpc 00:06:13.285 ************************************ 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:13.285 13:14:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.285 13:14:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3451558 ]] 00:06:13.285 13:14:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3451558 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3451558 ']' 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3451558 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3451558 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3451558' 00:06:13.285 killing process with pid 3451558 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3451558 00:06:13.285 13:14:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3451558 00:06:13.850 13:14:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3451613 ]] 00:06:13.850 13:14:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3451613 00:06:13.850 13:14:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3451613 ']' 00:06:13.850 13:14:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3451613 00:06:13.850 13:14:11 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:13.850 13:14:11 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.850 13:14:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3451613 00:06:13.851 13:14:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:13.851 13:14:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:13.851 13:14:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3451613' 00:06:13.851 killing process with pid 3451613 00:06:13.851 13:14:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3451613 00:06:13.851 13:14:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3451613 00:06:14.110 13:14:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.110 13:14:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.110 13:14:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3451558 ]] 00:06:14.110 13:14:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3451558 00:06:14.110 13:14:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3451558 ']' 00:06:14.110 13:14:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3451558 00:06:14.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3451558) - No such process 00:06:14.110 13:14:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3451558 is not found' 00:06:14.110 Process with pid 3451558 is not found 00:06:14.110 13:14:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3451613 ]] 00:06:14.110 13:14:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3451613 00:06:14.110 13:14:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3451613 ']' 00:06:14.110 13:14:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3451613 00:06:14.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3451613) - No such process 00:06:14.110 13:14:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3451613 is not found' 00:06:14.110 Process with pid 3451613 is not found 00:06:14.110 13:14:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.110 00:06:14.110 real 0m15.238s 00:06:14.110 user 0m27.041s 00:06:14.110 sys 0m5.192s 00:06:14.110 13:14:11 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.110 13:14:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.110 ************************************ 00:06:14.110 END TEST cpu_locks 00:06:14.110 ************************************ 00:06:14.110 13:14:11 event -- common/autotest_common.sh@1142 -- # return 0 00:06:14.110 00:06:14.110 real 0m38.840s 00:06:14.110 user 1m14.833s 00:06:14.110 sys 0m9.176s 00:06:14.110 13:14:11 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.110 13:14:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.110 ************************************ 00:06:14.110 END TEST event 00:06:14.110 ************************************ 00:06:14.110 13:14:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.110 13:14:11 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.110 13:14:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.110 13:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.110 
13:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:14.369 ************************************ 00:06:14.369 START TEST thread 00:06:14.369 ************************************ 00:06:14.369 13:14:11 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.369 * Looking for test storage... 00:06:14.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:14.369 13:14:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.369 13:14:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:14.369 13:14:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.369 13:14:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.369 ************************************ 00:06:14.369 START TEST thread_poller_perf 00:06:14.369 ************************************ 00:06:14.369 13:14:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.369 [2024-07-12 13:14:11.668799] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:14.369 [2024-07-12 13:14:11.668867] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452052 ] 00:06:14.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.369 [2024-07-12 13:14:11.701031] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:14.369 [2024-07-12 13:14:11.728269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.369 [2024-07-12 13:14:11.815726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.369 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:15.750 ====================================== 00:06:15.750 busy:2708451534 (cyc) 00:06:15.750 total_run_count: 362000 00:06:15.750 tsc_hz: 2700000000 (cyc) 00:06:15.750 ====================================== 00:06:15.750 poller_cost: 7481 (cyc), 2770 (nsec) 00:06:15.750 00:06:15.750 real 0m1.242s 00:06:15.750 user 0m1.165s 00:06:15.750 sys 0m0.072s 00:06:15.750 13:14:12 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.750 13:14:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.750 ************************************ 00:06:15.750 END TEST thread_poller_perf 00:06:15.750 ************************************ 00:06:15.750 13:14:12 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:15.750 13:14:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.750 13:14:12 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:15.750 13:14:12 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.750 13:14:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.750 ************************************ 00:06:15.750 START TEST thread_poller_perf 00:06:15.750 ************************************ 00:06:15.750 13:14:12 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.750 [2024-07-12 13:14:12.955066] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:15.750 [2024-07-12 13:14:12.955123] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452208 ] 00:06:15.750 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.750 [2024-07-12 13:14:12.987191] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.750 [2024-07-12 13:14:13.012668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.750 [2024-07-12 13:14:13.099610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.750 Running 1000 pollers for 1 seconds with 0 microseconds period. 
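Editorial note on the result blocks above and below: poller_perf prints the busy TSC cycles, the number of poller executions, and the TSC frequency, and the poller_cost line is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (both runs in this log match that arithmetic). A minimal sketch reproducing the 1-microsecond-period figures above, not part of the test output:

    # Recompute poller_cost from the numbers printed by the first run above.
    busy=2708451534          # busy TSC cycles over the 1 second run
    total_run_count=362000   # poller executions counted by the test
    tsc_hz=2700000000        # TSC frequency in cycles per second
    cost_cyc=$(( busy / total_run_count ))            # 7481 cycles per poller
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # ~2770 ns per poller
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"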
00:06:17.122 ====================================== 00:06:17.122 busy:2702023659 (cyc) 00:06:17.122 total_run_count: 4845000 00:06:17.122 tsc_hz: 2700000000 (cyc) 00:06:17.122 ====================================== 00:06:17.122 poller_cost: 557 (cyc), 206 (nsec) 00:06:17.122 00:06:17.122 real 0m1.235s 00:06:17.122 user 0m1.153s 00:06:17.122 sys 0m0.076s 00:06:17.122 13:14:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.122 13:14:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.122 ************************************ 00:06:17.122 END TEST thread_poller_perf 00:06:17.122 ************************************ 00:06:17.122 13:14:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:17.122 13:14:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.122 00:06:17.122 real 0m2.619s 00:06:17.122 user 0m2.373s 00:06:17.122 sys 0m0.246s 00:06:17.122 13:14:14 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.122 13:14:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.122 ************************************ 00:06:17.122 END TEST thread 00:06:17.122 ************************************ 00:06:17.122 13:14:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.122 13:14:14 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:17.122 13:14:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.122 13:14:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.122 13:14:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.122 ************************************ 00:06:17.122 START TEST accel 00:06:17.122 ************************************ 00:06:17.122 13:14:14 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:17.122 * Looking for test storage... 00:06:17.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:17.122 13:14:14 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:17.122 13:14:14 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:17.122 13:14:14 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.122 13:14:14 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3452404 00:06:17.122 13:14:14 accel -- accel/accel.sh@63 -- # waitforlisten 3452404 00:06:17.122 13:14:14 accel -- common/autotest_common.sh@829 -- # '[' -z 3452404 ']' 00:06:17.122 13:14:14 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.122 13:14:14 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:17.122 13:14:14 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.122 13:14:14 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
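The accel suite starting here launches spdk_tgt and then blocks until its RPC socket is up; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above comes from that wait. A rough sketch of that kind of wait loop follows; the helper name is hypothetical, and the real waitforlisten in autotest_common.sh also tracks the pid and talks to the RPC endpoint:

    # Poll until a UNIX-domain socket appears, or give up after a timeout.
    wait_for_sock() {
        local sock=${1:-/var/tmp/spdk.sock} timeout=${2:-30} i
        for ((i = 0; i < timeout * 10; i++)); do
            [ -S "$sock" ] && return 0   # socket exists: target is listening
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }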
00:06:17.122 13:14:14 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:17.122 13:14:14 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.122 13:14:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.122 13:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.122 13:14:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.122 13:14:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.122 13:14:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.122 13:14:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.122 13:14:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:17.122 13:14:14 accel -- accel/accel.sh@41 -- # jq -r . 00:06:17.122 [2024-07-12 13:14:14.346789] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:17.122 [2024-07-12 13:14:14.346882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452404 ] 00:06:17.122 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.122 [2024-07-12 13:14:14.381208] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.122 [2024-07-12 13:14:14.408116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.122 [2024-07-12 13:14:14.496015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@862 -- # return 0 00:06:17.380 13:14:14 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:17.380 13:14:14 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:17.380 13:14:14 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:17.380 13:14:14 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:17.380 13:14:14 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:17.380 13:14:14 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:17.380 13:14:14 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 
13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.380 13:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.380 13:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.380 13:14:14 accel -- accel/accel.sh@75 -- # killprocess 3452404 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@948 -- # '[' -z 3452404 ']' 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@952 -- # kill -0 3452404 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@953 -- # uname 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3452404 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.380 13:14:14 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3452404' 00:06:17.380 killing process with pid 3452404 00:06:17.381 13:14:14 accel -- common/autotest_common.sh@967 -- # kill 3452404 00:06:17.381 13:14:14 accel -- common/autotest_common.sh@972 -- # wait 3452404 00:06:17.945 13:14:15 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:17.945 13:14:15 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:17.945 13:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:17.945 13:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.945 13:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.945 13:14:15 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:17.945 13:14:15 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:17.945 13:14:15 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:17.945 13:14:15 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.945 13:14:15 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.945 13:14:15 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.946 13:14:15 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.946 13:14:15 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.946 13:14:15 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:17.946 13:14:15 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
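The cleanup just traced (kill -0 to probe the pid, ps --no-headers -o comm= to make sure the target is a reactor and not sudo, then kill and wait) is the killprocess pattern that recurs throughout this log. A simplified sketch of the idea, not the exact helper from autotest_common.sh:

    # Kill a test process only if it still exists and is not the sudo wrapper.
    killprocess_sketch() {
        local pid=$1 name
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1        # refuse to kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true       # reaps only if it is our child
    }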
00:06:17.946 13:14:15 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.946 13:14:15 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:17.946 13:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.946 13:14:15 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:17.946 13:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.946 13:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.946 13:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.946 ************************************ 00:06:17.946 START TEST accel_missing_filename 00:06:17.946 ************************************ 00:06:17.946 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:17.946 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:17.946 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:17.946 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.946 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.946 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.946 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.946 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:17.946 13:14:15 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:17.946 [2024-07-12 13:14:15.320644] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:17.946 [2024-07-12 13:14:15.320711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452570 ] 00:06:17.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.946 [2024-07-12 13:14:15.352892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
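accel_missing_filename, which is starting here, runs accel_perf under the NOT wrapper: compress without -l is expected to fail ("A filename is required." below), and the test passes only because the command exits non-zero. A minimal sketch of that expect-failure idiom; the name is hypothetical, and the real NOT helper in autotest_common.sh also normalizes the exit status (the es= lines below):

    # Succeed only when the wrapped command fails.
    expect_failure() {
        if "$@"; then
            echo "NOT: command unexpectedly succeeded: $*" >&2
            return 1
        fi
        return 0
    }
    # e.g. expect_failure "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress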
00:06:17.946 [2024-07-12 13:14:15.380672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.203 [2024-07-12 13:14:15.471019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.203 [2024-07-12 13:14:15.529406] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.203 [2024-07-12 13:14:15.613146] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:18.461 A filename is required. 00:06:18.461 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:18.462 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.462 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:18.462 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:18.462 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:18.462 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.462 00:06:18.462 real 0m0.392s 00:06:18.462 user 0m0.289s 00:06:18.462 sys 0m0.137s 00:06:18.462 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.462 13:14:15 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:18.462 ************************************ 00:06:18.462 END TEST accel_missing_filename 00:06:18.462 ************************************ 00:06:18.462 13:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.462 13:14:15 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.462 13:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:18.462 13:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.462 13:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.462 ************************************ 00:06:18.462 START TEST accel_compress_verify 00:06:18.462 ************************************ 00:06:18.462 13:14:15 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.462 13:14:15 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:18.462 13:14:15 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.462 13:14:15 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:18.462 13:14:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.462 13:14:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:18.462 13:14:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.462 13:14:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.462 13:14:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.462 13:14:15 
accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:18.462 13:14:15 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.462 13:14:15 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.462 13:14:15 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.462 13:14:15 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.462 13:14:15 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.462 13:14:15 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:18.462 13:14:15 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:18.462 [2024-07-12 13:14:15.764028] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:18.462 [2024-07-12 13:14:15.764113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452602 ] 00:06:18.462 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.462 [2024-07-12 13:14:15.798514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.462 [2024-07-12 13:14:15.826467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.462 [2024-07-12 13:14:15.909411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.721 [2024-07-12 13:14:15.966528] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.721 [2024-07-12 13:14:16.051274] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:18.721 00:06:18.721 Compression does not support the verify option, aborting. 
00:06:18.721 13:14:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:18.721 13:14:16 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.721 13:14:16 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:18.721 13:14:16 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:18.721 13:14:16 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:18.721 13:14:16 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.721 00:06:18.721 real 0m0.388s 00:06:18.721 user 0m0.285s 00:06:18.721 sys 0m0.138s 00:06:18.721 13:14:16 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.721 13:14:16 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:18.721 ************************************ 00:06:18.721 END TEST accel_compress_verify 00:06:18.721 ************************************ 00:06:18.721 13:14:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.721 13:14:16 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:18.721 13:14:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.721 13:14:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.721 13:14:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.721 ************************************ 00:06:18.721 START TEST accel_wrong_workload 00:06:18.721 ************************************ 00:06:18.721 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:18.721 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:18.722 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:18.722 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:18.722 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.722 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:18.722 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.722 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:18.722 13:14:16 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:06:18.981 Unsupported workload type: foobar 00:06:18.981 [2024-07-12 13:14:16.196933] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:18.981 accel_perf options: 00:06:18.981 [-h help message] 00:06:18.981 [-q queue depth per core] 00:06:18.981 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.981 [-T number of threads per core 00:06:18.981 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.981 [-t time in seconds] 00:06:18.981 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.981 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:18.981 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.981 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.981 [-S for crc32c workload, use this seed value (default 0) 00:06:18.981 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.981 [-f for fill workload, use this BYTE value (default 255) 00:06:18.981 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.981 [-y verify result if this switch is on] 00:06:18.981 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.981 Can be used to spread operations across a wider range of memory. 00:06:18.981 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:18.981 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.981 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.981 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.981 00:06:18.981 real 0m0.022s 00:06:18.981 user 0m0.015s 00:06:18.981 sys 0m0.008s 00:06:18.981 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.981 13:14:16 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:18.981 ************************************ 00:06:18.981 END TEST accel_wrong_workload 00:06:18.981 ************************************ 00:06:18.981 Error: writing output failed: Broken pipe 00:06:18.981 13:14:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.981 13:14:16 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.981 13:14:16 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:18.981 13:14:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.981 13:14:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.981 ************************************ 00:06:18.981 START TEST accel_negative_buffers 00:06:18.981 ************************************ 00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:18.981 13:14:16 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:18.981 -x option must be non-negative. 00:06:18.981 [2024-07-12 13:14:16.268392] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:18.981 accel_perf options: 00:06:18.981 [-h help message] 00:06:18.981 [-q queue depth per core] 00:06:18.981 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.981 [-T number of threads per core 00:06:18.981 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.981 [-t time in seconds] 00:06:18.981 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.981 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:18.981 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.981 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.981 [-S for crc32c workload, use this seed value (default 0) 00:06:18.981 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.981 [-f for fill workload, use this BYTE value (default 255) 00:06:18.981 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.981 [-y verify result if this switch is on] 00:06:18.981 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.981 Can be used to spread operations across a wider range of memory. 
00:06:18.981 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:18.982 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.982 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.982 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.982 00:06:18.982 real 0m0.024s 00:06:18.982 user 0m0.013s 00:06:18.982 sys 0m0.011s 00:06:18.982 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.982 13:14:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:18.982 ************************************ 00:06:18.982 END TEST accel_negative_buffers 00:06:18.982 ************************************ 00:06:18.982 Error: writing output failed: Broken pipe 00:06:18.982 13:14:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.982 13:14:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:18.982 13:14:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:18.982 13:14:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.982 13:14:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.982 ************************************ 00:06:18.982 START TEST accel_crc32c 00:06:18.982 ************************************ 00:06:18.982 13:14:16 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:18.982 13:14:16 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:18.982 [2024-07-12 13:14:16.333772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:18.982 [2024-07-12 13:14:16.333835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452780 ] 00:06:18.982 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.982 [2024-07-12 13:14:16.365211] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
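The long trace that follows is the accel_test harness stepping through the workload parameters for this case (crc32c, seed 32, 4096-byte buffers, the software module, 1 second) with an IFS=: / read loop and a case statement, as the repeated "IFS=:", "read -r var val" and "val=..." lines show. A heavily simplified skeleton of that shape; the key names in the case arms are assumptions, the real loop lives in test/accel/accel.sh:

    # Skeleton of a key/value read loop like the one traced below.
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # assumed key; e.g. crc32c
            module) accel_module=$val ;;  # assumed key; e.g. software
            *)      : ;;                  # everything else ignored in this sketch
        esac
    done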
00:06:18.982 [2024-07-12 13:14:16.390503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.240 [2024-07-12 13:14:16.476733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.240 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.241 13:14:16 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val=32 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.241 13:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.614 
13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:20.614 13:14:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.614 00:06:20.614 real 0m1.385s 00:06:20.614 user 0m1.254s 00:06:20.614 sys 0m0.133s 00:06:20.614 13:14:17 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.614 13:14:17 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:20.614 ************************************ 00:06:20.614 END TEST accel_crc32c 00:06:20.614 ************************************ 00:06:20.614 13:14:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.614 13:14:17 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:20.614 13:14:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:20.614 13:14:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.614 13:14:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.614 ************************************ 00:06:20.614 START TEST accel_crc32c_C2 00:06:20.614 ************************************ 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.614 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:20.614 [2024-07-12 13:14:17.765196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
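Each accel case above finishes with the same three assertions: the recorded module and opcode must be non-empty, and the opcode must have been handled by the expected module, which is software for every opcode in this run since no hardware accel module is configured. In script form the check traced above is roughly:

    # Final verification step of an accel test case, as traced above.
    [[ -n "$accel_module" ]]           # a module was recorded
    [[ -n "$accel_opc" ]]              # an opcode was recorded
    [[ "$accel_module" == software ]]  # and it ran on the software engine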
00:06:20.615 [2024-07-12 13:14:17.765259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452944 ] 00:06:20.615 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.615 [2024-07-12 13:14:17.797492] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.615 [2024-07-12 13:14:17.822649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.615 [2024-07-12 13:14:17.907122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.615 13:14:17 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.615 13:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.989 00:06:21.989 real 0m1.385s 00:06:21.989 user 0m1.253s 00:06:21.989 sys 0m0.135s 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.989 13:14:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:21.989 ************************************ 00:06:21.989 END TEST accel_crc32c_C2 00:06:21.989 ************************************ 00:06:21.989 13:14:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.989 13:14:19 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:21.989 13:14:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.989 13:14:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.989 13:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.989 ************************************ 00:06:21.989 START TEST accel_copy 00:06:21.989 ************************************ 00:06:21.989 13:14:19 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:21.989 [2024-07-12 13:14:19.198712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:21.989 [2024-07-12 13:14:19.198775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453096 ] 00:06:21.989 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.989 [2024-07-12 13:14:19.231850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.989 [2024-07-12 13:14:19.257947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.989 [2024-07-12 13:14:19.344945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.989 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.990 13:14:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:23.364 13:14:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.364 00:06:23.364 real 0m1.387s 00:06:23.364 user 0m1.252s 00:06:23.364 sys 0m0.136s 00:06:23.364 13:14:20 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.364 13:14:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 ************************************ 00:06:23.364 END TEST accel_copy 00:06:23.364 ************************************ 00:06:23.364 13:14:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.364 13:14:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.364 13:14:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:23.364 13:14:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.364 13:14:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 ************************************ 00:06:23.364 START TEST accel_fill 00:06:23.364 ************************************ 00:06:23.364 13:14:20 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
00:06:23.364 [2024-07-12 13:14:20.632750] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:23.364 [2024-07-12 13:14:20.632813] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453364 ] 00:06:23.364 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.364 [2024-07-12 13:14:20.664691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:23.364 [2024-07-12 13:14:20.689432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.364 [2024-07-12 13:14:20.773779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.364 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:23.622 13:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.556 13:14:21 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:24.556 13:14:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.556 00:06:24.556 real 0m1.382s 00:06:24.556 user 0m1.247s 00:06:24.556 sys 0m0.138s 00:06:24.556 13:14:21 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.556 13:14:21 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:24.556 ************************************ 00:06:24.556 END TEST accel_fill 00:06:24.556 ************************************ 00:06:24.556 13:14:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.556 13:14:22 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:24.556 13:14:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:24.556 13:14:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.556 13:14:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.815 ************************************ 00:06:24.815 START TEST accel_copy_crc32c 00:06:24.815 ************************************ 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:24.815 
13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:24.815 [2024-07-12 13:14:22.063004] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:24.815 [2024-07-12 13:14:22.063068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453525 ] 00:06:24.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.815 [2024-07-12 13:14:22.095489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:24.815 [2024-07-12 13:14:22.120435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.815 [2024-07-12 13:14:22.204233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.815 13:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.815 13:14:22 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.186 00:06:26.186 real 0m1.374s 00:06:26.186 user 0m1.244s 00:06:26.186 sys 0m0.133s 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.186 13:14:23 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:26.187 ************************************ 00:06:26.187 END TEST accel_copy_crc32c 00:06:26.187 ************************************ 00:06:26.187 13:14:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.187 13:14:23 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:26.187 13:14:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:26.187 13:14:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.187 13:14:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.187 ************************************ 00:06:26.187 START TEST accel_copy_crc32c_C2 00:06:26.187 ************************************ 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local 
accel_opc 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.187 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:26.187 [2024-07-12 13:14:23.485681] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:26.187 [2024-07-12 13:14:23.485744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453684 ] 00:06:26.187 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.187 [2024-07-12 13:14:23.518393] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:26.187 [2024-07-12 13:14:23.543054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.187 [2024-07-12 13:14:23.627390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.445 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.446 13:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.381 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.381 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.381 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.381 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.381 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.381 13:14:24 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:27.381 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.381 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.689 00:06:27.689 real 0m1.388s 00:06:27.689 user 0m1.254s 00:06:27.689 sys 0m0.137s 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.689 13:14:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:27.689 ************************************ 00:06:27.689 END TEST accel_copy_crc32c_C2 00:06:27.689 ************************************ 00:06:27.689 13:14:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.689 13:14:24 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:27.689 13:14:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:27.689 13:14:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.689 13:14:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.689 ************************************ 00:06:27.689 START TEST accel_dualcast 00:06:27.689 ************************************ 00:06:27.689 13:14:24 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:27.689 13:14:24 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:27.689 [2024-07-12 13:14:24.923755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:27.689 [2024-07-12 13:14:24.923820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453846 ] 00:06:27.689 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.689 [2024-07-12 13:14:24.955846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.689 [2024-07-12 13:14:24.983718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.689 [2024-07-12 13:14:25.069045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:27.689 13:14:25 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 13:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.064 13:14:26 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:29.064 13:14:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.064 00:06:29.064 real 0m1.386s 00:06:29.064 user 0m1.255s 00:06:29.064 sys 0m0.133s 00:06:29.064 13:14:26 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.064 13:14:26 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:29.064 ************************************ 00:06:29.064 END TEST accel_dualcast 00:06:29.064 ************************************ 00:06:29.064 13:14:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.064 13:14:26 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:29.064 13:14:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.064 13:14:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.064 13:14:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.064 ************************************ 00:06:29.064 START TEST accel_compare 00:06:29.064 ************************************ 00:06:29.064 13:14:26 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:29.064 13:14:26 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:29.064 13:14:26 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:29.064 [2024-07-12 13:14:26.359335] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:29.064 [2024-07-12 13:14:26.359418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454110 ] 00:06:29.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.064 [2024-07-12 13:14:26.392010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.064 [2024-07-12 13:14:26.417083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.064 [2024-07-12 13:14:26.501961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@23 
-- # accel_opc=compare 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.325 13:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.258 13:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.258 13:14:27 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.258 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.258 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.258 13:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:30.259 13:14:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.259 00:06:30.259 real 0m1.384s 00:06:30.259 user 0m1.239s 00:06:30.259 sys 0m0.147s 00:06:30.259 13:14:27 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.259 13:14:27 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:30.259 ************************************ 00:06:30.259 END TEST accel_compare 00:06:30.259 ************************************ 00:06:30.517 13:14:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.517 13:14:27 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:30.517 13:14:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.517 13:14:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.517 13:14:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.517 ************************************ 00:06:30.517 START TEST accel_xor 00:06:30.517 ************************************ 00:06:30.517 13:14:27 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.517 13:14:27 
accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:30.517 13:14:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:30.517 [2024-07-12 13:14:27.786185] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:30.517 [2024-07-12 13:14:27.786248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454273 ] 00:06:30.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.517 [2024-07-12 13:14:27.819029] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.518 [2024-07-12 13:14:27.844072] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.518 [2024-07-12 13:14:27.929074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 
accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.518 13:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.889 13:14:29 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:31.889 13:14:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.889 00:06:31.889 real 0m1.378s 00:06:31.889 user 0m1.250s 00:06:31.889 sys 0m0.131s 00:06:31.889 13:14:29 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.889 13:14:29 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:31.889 ************************************ 00:06:31.889 END TEST accel_xor 00:06:31.889 ************************************ 00:06:31.889 13:14:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.889 13:14:29 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:31.889 13:14:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:31.889 13:14:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.889 13:14:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.890 ************************************ 00:06:31.890 START TEST accel_xor 00:06:31.890 ************************************ 00:06:31.890 13:14:29 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:31.890 13:14:29 accel.accel_xor 
-- accel/accel.sh@12 -- # build_accel_config 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:31.890 13:14:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:31.890 [2024-07-12 13:14:29.214127] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:31.890 [2024-07-12 13:14:29.214191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454429 ] 00:06:31.890 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.890 [2024-07-12 13:14:29.245710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.890 [2024-07-12 13:14:29.271266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.890 [2024-07-12 13:14:29.355545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.148 13:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@20 -- # 
val= 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:33.522 13:14:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.522 00:06:33.522 real 0m1.390s 00:06:33.522 user 0m1.257s 00:06:33.522 sys 0m0.134s 00:06:33.522 13:14:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.522 13:14:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:33.522 ************************************ 00:06:33.522 END TEST accel_xor 00:06:33.522 ************************************ 00:06:33.522 13:14:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.522 13:14:30 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:33.522 13:14:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:33.522 13:14:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.522 13:14:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.522 ************************************ 00:06:33.522 START TEST accel_dif_verify 00:06:33.522 ************************************ 00:06:33.522 13:14:30 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.522 13:14:30 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:33.523 [2024-07-12 13:14:30.652505] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:33.523 [2024-07-12 13:14:30.652568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454607 ] 00:06:33.523 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.523 [2024-07-12 13:14:30.684887] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.523 [2024-07-12 13:14:30.709928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.523 [2024-07-12 13:14:30.794039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.523 13:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.895 13:14:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:34.896 13:14:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.896 00:06:34.896 real 0m1.384s 00:06:34.896 user 0m1.245s 00:06:34.896 sys 0m0.143s 00:06:34.896 13:14:32 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.896 13:14:32 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:34.896 ************************************ 00:06:34.896 END TEST accel_dif_verify 00:06:34.896 ************************************ 00:06:34.896 13:14:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.896 13:14:32 accel -- 
accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:34.896 13:14:32 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:34.896 13:14:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.896 13:14:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.896 ************************************ 00:06:34.896 START TEST accel_dif_generate 00:06:34.896 ************************************ 00:06:34.896 13:14:32 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:34.896 [2024-07-12 13:14:32.087204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:34.896 [2024-07-12 13:14:32.087266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454855 ] 00:06:34.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.896 [2024-07-12 13:14:32.119366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
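# --- Editorial note, not part of the captured job output ---
# The "EAL: No free 2048 kB hugepages reported on node 1" line above is informational as
# long as another NUMA node can satisfy the allocation; the "Total cores available: 1"
# and "Reactor started on core 0" notices that follow show initialization succeeded. To
# check and, if necessary, reserve hugepages before rerunning these accel cases on a
# fresh machine, the stock SPDK setup helper from the same checkout can be used
# (HUGEMEM is in megabytes; 2048 is an illustrative value, not one taken from this job):
grep -i hugepages /proc/meminfo                                                        # show the current hugepage pool
sudo HUGEMEM=2048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh   # reserve ~2 GiB (also rebinds NVMe devices by default)
# --- end editorial note ---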
00:06:34.896 [2024-07-12 13:14:32.143496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.896 [2024-07-12 13:14:32.227916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 
13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 13:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.267 13:14:33 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:36.267 13:14:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.267 00:06:36.267 real 0m1.383s 00:06:36.267 user 0m1.250s 00:06:36.267 sys 0m0.137s 00:06:36.267 13:14:33 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.267 13:14:33 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:36.267 ************************************ 00:06:36.267 END TEST accel_dif_generate 00:06:36.267 ************************************ 00:06:36.267 13:14:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.267 13:14:33 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:36.267 13:14:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:36.267 13:14:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.267 13:14:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.267 ************************************ 00:06:36.267 START TEST accel_dif_generate_copy 00:06:36.267 ************************************ 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate_copy 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.267 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:36.268 [2024-07-12 13:14:33.517607] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:36.268 [2024-07-12 13:14:33.517678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455012 ] 00:06:36.268 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.268 [2024-07-12 13:14:33.549201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:36.268 [2024-07-12 13:14:33.575044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.268 [2024-07-12 13:14:33.658436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.268 13:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:37.636 00:06:37.636 real 0m1.368s 00:06:37.636 user 0m1.238s 00:06:37.636 sys 0m0.132s 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.636 13:14:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:37.636 ************************************ 00:06:37.636 END TEST accel_dif_generate_copy 00:06:37.636 ************************************ 00:06:37.636 13:14:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.636 13:14:34 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:37.636 13:14:34 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.636 13:14:34 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:37.636 13:14:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.636 13:14:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.636 ************************************ 00:06:37.636 START TEST accel_comp 00:06:37.636 ************************************ 00:06:37.636 13:14:34 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:37.636 13:14:34 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:37.636 [2024-07-12 13:14:34.936514] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:37.636 [2024-07-12 13:14:34.936572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455175 ] 00:06:37.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.636 [2024-07-12 13:14:34.969207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:37.636 [2024-07-12 13:14:34.993804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.636 [2024-07-12 13:14:35.078427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:37.894 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:37.895 13:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.264 13:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:39.265 13:14:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.265 00:06:39.265 real 0m1.389s 00:06:39.265 user 0m1.257s 00:06:39.265 sys 0m0.135s 00:06:39.265 13:14:36 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.265 13:14:36 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:39.265 ************************************ 00:06:39.265 END TEST accel_comp 00:06:39.265 ************************************ 00:06:39.265 13:14:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.265 13:14:36 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.265 13:14:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:39.265 13:14:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.265 13:14:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.265 ************************************ 00:06:39.265 START TEST accel_decomp 00:06:39.265 ************************************ 00:06:39.265 13:14:36 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 
00:06:39.265 [2024-07-12 13:14:36.369229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:39.265 [2024-07-12 13:14:36.369293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455371 ] 00:06:39.265 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.265 [2024-07-12 13:14:36.400250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.265 [2024-07-12 13:14:36.425158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.265 [2024-07-12 13:14:36.509412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.265 13:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:37 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.637 13:14:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.637 00:06:40.637 real 0m1.383s 00:06:40.637 user 0m1.250s 00:06:40.637 sys 0m0.135s 00:06:40.637 13:14:37 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.637 13:14:37 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:40.637 ************************************ 00:06:40.637 END TEST accel_decomp 00:06:40.637 ************************************ 00:06:40.637 13:14:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.637 13:14:37 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.637 13:14:37 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:40.637 13:14:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.637 13:14:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.637 ************************************ 00:06:40.637 START TEST accel_decomp_full 00:06:40.637 ************************************ 00:06:40.637 13:14:37 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:40.637 13:14:37 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:40.637 [2024-07-12 13:14:37.799021] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:40.637 [2024-07-12 13:14:37.799085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455594 ] 00:06:40.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.637 [2024-07-12 13:14:37.831554] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.637 [2024-07-12 13:14:37.856202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.637 [2024-07-12 13:14:37.941243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.637 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.638 13:14:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.009 13:14:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.009 00:06:42.009 real 0m1.395s 00:06:42.009 user 0m1.262s 00:06:42.009 sys 0m0.136s 00:06:42.009 13:14:39 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.009 13:14:39 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 
00:06:42.009 ************************************ 00:06:42.009 END TEST accel_decomp_full 00:06:42.009 ************************************ 00:06:42.009 13:14:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.009 13:14:39 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.009 13:14:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:42.009 13:14:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.009 13:14:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.009 ************************************ 00:06:42.010 START TEST accel_decomp_mcore 00:06:42.010 ************************************ 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:42.010 [2024-07-12 13:14:39.243807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:42.010 [2024-07-12 13:14:39.243869] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455755 ] 00:06:42.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.010 [2024-07-12 13:14:39.276734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:42.010 [2024-07-12 13:14:39.301980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.010 [2024-07-12 13:14:39.392618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.010 [2024-07-12 13:14:39.396349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.010 [2024-07-12 13:14:39.396403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.010 [2024-07-12 13:14:39.396407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.010 13:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 
13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.385 00:06:43.385 real 0m1.405s 00:06:43.385 user 0m4.717s 00:06:43.385 sys 0m0.153s 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.385 13:14:40 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:43.385 ************************************ 00:06:43.385 END TEST accel_decomp_mcore 00:06:43.385 ************************************ 00:06:43.385 13:14:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.385 
13:14:40 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.385 13:14:40 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:43.385 13:14:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.385 13:14:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.385 ************************************ 00:06:43.385 START TEST accel_decomp_full_mcore 00:06:43.385 ************************************ 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:43.385 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:43.385 [2024-07-12 13:14:40.698632] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:43.385 [2024-07-12 13:14:40.698696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455917 ] 00:06:43.385 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.386 [2024-07-12 13:14:40.731798] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:43.386 [2024-07-12 13:14:40.756937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.386 [2024-07-12 13:14:40.844154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.386 [2024-07-12 13:14:40.844259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.386 [2024-07-12 13:14:40.844343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.386 [2024-07-12 13:14:40.844347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:43.685 13:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.619 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.878 00:06:44.878 real 0m1.412s 00:06:44.878 user 0m4.752s 00:06:44.878 sys 0m0.153s 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.878 13:14:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:44.878 ************************************ 00:06:44.878 END TEST accel_decomp_full_mcore 00:06:44.878 ************************************ 00:06:44.878 13:14:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.878 13:14:42 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.878 13:14:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:44.878 13:14:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.878 13:14:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.878 ************************************ 00:06:44.878 START TEST accel_decomp_mthread 00:06:44.878 ************************************ 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:44.878 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:44.878 [2024-07-12 13:14:42.158506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:44.878 [2024-07-12 13:14:42.158569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456197 ] 00:06:44.878 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.878 [2024-07-12 13:14:42.190801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
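The full_mcore run above finished in roughly 1.4 s of wall time while accumulating about 4.75 s of CPU, consistent with the 0xf core mask spreading the decompress work across four reactors. The accel_decomp_mthread stage now starting flips that around: the EAL line just below reports a single available core, and the traced accel_perf command adds -T 2, so the same software decompress path is exercised by two worker threads on one core. A matching stand-alone sketch, with the same caveat that the /dev/fd/62 config feed is omitted:

# sketch only: single-core, two-thread variant of the same decompress workload
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" \
    -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2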
00:06:44.878 [2024-07-12 13:14:42.215413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.878 [2024-07-12 13:14:42.298829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.136 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 13:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.073 00:06:46.073 real 0m1.390s 00:06:46.073 user 0m1.254s 00:06:46.073 sys 0m0.139s 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.073 13:14:43 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:46.073 ************************************ 00:06:46.073 END TEST accel_decomp_mthread 00:06:46.073 ************************************ 00:06:46.331 13:14:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.331 13:14:43 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.331 13:14:43 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:46.331 13:14:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.331 13:14:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.331 ************************************ 00:06:46.331 START TEST accel_decomp_full_mthread 00:06:46.331 ************************************ 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:46.331 [2024-07-12 13:14:43.600688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:46.331 [2024-07-12 13:14:43.600752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456348 ] 00:06:46.331 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.331 [2024-07-12 13:14:43.632159] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:46.331 [2024-07-12 13:14:43.658041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.331 [2024-07-12 13:14:43.741851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.331 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 13:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.522 13:14:44 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:47.522 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.523 00:06:47.523 real 0m1.405s 00:06:47.523 user 0m1.274s 00:06:47.523 sys 0m0.134s 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.523 13:14:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:47.523 ************************************ 00:06:47.523 END TEST accel_decomp_full_mthread 00:06:47.523 ************************************ 00:06:47.781 13:14:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.781 13:14:45 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:47.781 13:14:45 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 
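With the decompress variants done, run_test above hands over to the dedicated DIF functional test binary rather than accel_perf; the build_accel_config trace that follows shows the same pattern of piping a JSON accel config to the binary over /dev/fd/62. A bare-bones reproduction might look like the sketch below, where the empty JSON object and the way fd 62 is wired are placeholder assumptions, not what build_accel_config actually emits.

# sketch only: fd 62 carries the accel JSON config; '{}' is a placeholder
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/test/accel/dif/dif" -c /dev/fd/62 62< <(echo '{}')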
00:06:47.781 13:14:45 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:47.781 13:14:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.781 13:14:45 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:47.781 13:14:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.781 13:14:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.781 13:14:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.781 13:14:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.781 13:14:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.781 13:14:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.781 13:14:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:47.781 13:14:45 accel -- accel/accel.sh@41 -- # jq -r . 00:06:47.781 ************************************ 00:06:47.781 START TEST accel_dif_functional_tests 00:06:47.781 ************************************ 00:06:47.781 13:14:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:47.781 [2024-07-12 13:14:45.074123] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:47.781 [2024-07-12 13:14:45.074184] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456513 ] 00:06:47.781 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.781 [2024-07-12 13:14:45.105447] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:47.781 [2024-07-12 13:14:45.130066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.781 [2024-07-12 13:14:45.214202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.781 [2024-07-12 13:14:45.214311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.781 [2024-07-12 13:14:45.214313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.040 00:06:48.040 00:06:48.040 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.040 http://cunit.sourceforge.net/ 00:06:48.040 00:06:48.040 00:06:48.040 Suite: accel_dif 00:06:48.040 Test: verify: DIF generated, GUARD check ...passed 00:06:48.040 Test: verify: DIF generated, APPTAG check ...passed 00:06:48.040 Test: verify: DIF generated, REFTAG check ...passed 00:06:48.040 Test: verify: DIF not generated, GUARD check ...[2024-07-12 13:14:45.298343] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:48.040 passed 00:06:48.040 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 13:14:45.298421] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:48.040 passed 00:06:48.040 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 13:14:45.298455] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:48.040 passed 00:06:48.040 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:48.040 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 13:14:45.298518] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:48.040 passed 00:06:48.040 Test: verify: APPTAG incorrect, no APPTAG check ...passed 
00:06:48.040 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:48.040 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:48.040 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 13:14:45.298682] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:48.040 passed 00:06:48.040 Test: verify copy: DIF generated, GUARD check ...passed 00:06:48.040 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:48.040 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:48.040 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 13:14:45.298825] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:48.040 passed 00:06:48.040 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 13:14:45.298859] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:48.040 passed 00:06:48.040 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 13:14:45.298890] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:48.040 passed 00:06:48.040 Test: generate copy: DIF generated, GUARD check ...passed 00:06:48.040 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:48.040 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:48.040 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:48.040 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:48.040 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:48.040 Test: generate copy: iovecs-len validate ...[2024-07-12 13:14:45.299102] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:48.040 passed 00:06:48.040 Test: generate copy: buffer alignment validate ...passed 00:06:48.040 00:06:48.040 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.040 suites 1 1 n/a 0 0 00:06:48.040 tests 26 26 26 0 0 00:06:48.040 asserts 115 115 115 0 n/a 00:06:48.040 00:06:48.040 Elapsed time = 0.002 seconds 00:06:48.040 00:06:48.040 real 0m0.471s 00:06:48.040 user 0m0.750s 00:06:48.040 sys 0m0.163s 00:06:48.040 13:14:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.040 13:14:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:48.040 ************************************ 00:06:48.040 END TEST accel_dif_functional_tests 00:06:48.040 ************************************ 00:06:48.298 13:14:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.298 00:06:48.298 real 0m31.281s 00:06:48.298 user 0m34.841s 00:06:48.298 sys 0m4.429s 00:06:48.298 13:14:45 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.299 13:14:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 ************************************ 00:06:48.299 END TEST accel 00:06:48.299 ************************************ 00:06:48.299 13:14:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.299 13:14:45 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:48.299 13:14:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.299 13:14:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.299 13:14:45 -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 ************************************ 00:06:48.299 START TEST accel_rpc 00:06:48.299 ************************************ 00:06:48.299 13:14:45 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:48.299 * Looking for test storage... 00:06:48.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:48.299 13:14:45 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.299 13:14:45 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3456694 00:06:48.299 13:14:45 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:48.299 13:14:45 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3456694 00:06:48.299 13:14:45 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3456694 ']' 00:06:48.299 13:14:45 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.299 13:14:45 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.299 13:14:45 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.299 13:14:45 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.299 13:14:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 [2024-07-12 13:14:45.679232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
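The accel_rpc stage starting here does not run a workload at all; it starts spdk_tgt with --wait-for-rpc (as traced just above) and then drives opcode assignment purely over JSON-RPC. Condensed from the rpc_cmd calls that follow, the sequence is roughly this sketch; waiting for the RPC socket and tearing the target down again are left out.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
# assign the copy opcode, first to a bogus module, then to the software module
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
"$SPDK/scripts/rpc.py" framework_start_init
# confirm the assignment stuck
"$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy | grep software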
00:06:48.299 [2024-07-12 13:14:45.679338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456694 ] 00:06:48.299 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.299 [2024-07-12 13:14:45.710407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.299 [2024-07-12 13:14:45.736227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.558 [2024-07-12 13:14:45.821940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.558 13:14:45 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.558 13:14:45 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:48.558 13:14:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:48.558 13:14:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:48.558 13:14:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:48.558 13:14:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:48.558 13:14:45 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:48.558 13:14:45 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.558 13:14:45 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.558 13:14:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.558 ************************************ 00:06:48.558 START TEST accel_assign_opcode 00:06:48.558 ************************************ 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.558 [2024-07-12 13:14:45.906554] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.558 [2024-07-12 13:14:45.914564] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.558 13:14:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.816 13:14:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.816 13:14:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:48.816 
13:14:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.816 13:14:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.816 13:14:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:48.816 13:14:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:48.816 13:14:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.816 software 00:06:48.816 00:06:48.816 real 0m0.279s 00:06:48.816 user 0m0.038s 00:06:48.816 sys 0m0.008s 00:06:48.816 13:14:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.816 13:14:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.816 ************************************ 00:06:48.816 END TEST accel_assign_opcode 00:06:48.816 ************************************ 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:48.816 13:14:46 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3456694 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3456694 ']' 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3456694 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3456694 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3456694' 00:06:48.816 killing process with pid 3456694 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@967 -- # kill 3456694 00:06:48.816 13:14:46 accel_rpc -- common/autotest_common.sh@972 -- # wait 3456694 00:06:49.382 00:06:49.382 real 0m1.062s 00:06:49.382 user 0m1.003s 00:06:49.382 sys 0m0.409s 00:06:49.382 13:14:46 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.382 13:14:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.382 ************************************ 00:06:49.382 END TEST accel_rpc 00:06:49.382 ************************************ 00:06:49.382 13:14:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.382 13:14:46 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:49.382 13:14:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.382 13:14:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.382 13:14:46 -- common/autotest_common.sh@10 -- # set +x 00:06:49.382 ************************************ 00:06:49.382 START TEST app_cmdline 00:06:49.382 ************************************ 00:06:49.382 13:14:46 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:49.382 * Looking for test storage... 
00:06:49.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:49.382 13:14:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:49.382 13:14:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3456898 00:06:49.382 13:14:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:49.382 13:14:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3456898 00:06:49.382 13:14:46 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3456898 ']' 00:06:49.382 13:14:46 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.382 13:14:46 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.382 13:14:46 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.382 13:14:46 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.382 13:14:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.382 [2024-07-12 13:14:46.790176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:06:49.382 [2024-07-12 13:14:46.790265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456898 ] 00:06:49.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.382 [2024-07-12 13:14:46.821668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
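The app_cmdline stage restricts the target to exactly two RPCs via --rpcs-allowed (see the spdk_tgt command traced above) and then checks both the allowed and the filtered path: spdk_get_version and rpc_get_methods must answer, while anything else has to come back as JSON-RPC error -32601 "Method not found", as the env_dpdk_get_mem_stats exchange further down shows. Stripped of harness plumbing, the calls amount to:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
"$SPDK/scripts/rpc.py" spdk_get_version                      # allowed
"$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort  # allowed
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats || true        # filtered, expect -32601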
00:06:49.382 [2024-07-12 13:14:46.846908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.640 [2024-07-12 13:14:46.931413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.898 13:14:47 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.898 13:14:47 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:49.898 13:14:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:50.156 { 00:06:50.156 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:06:50.156 "fields": { 00:06:50.156 "major": 24, 00:06:50.156 "minor": 9, 00:06:50.156 "patch": 0, 00:06:50.156 "suffix": "-pre", 00:06:50.156 "commit": "719d03c6a" 00:06:50.156 } 00:06:50.156 } 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:50.156 13:14:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:50.156 13:14:47 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:50.414 request: 00:06:50.414 { 00:06:50.414 "method": 
"env_dpdk_get_mem_stats", 00:06:50.414 "req_id": 1 00:06:50.414 } 00:06:50.414 Got JSON-RPC error response 00:06:50.414 response: 00:06:50.414 { 00:06:50.414 "code": -32601, 00:06:50.414 "message": "Method not found" 00:06:50.414 } 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.414 13:14:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3456898 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3456898 ']' 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3456898 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3456898 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3456898' 00:06:50.414 killing process with pid 3456898 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@967 -- # kill 3456898 00:06:50.414 13:14:47 app_cmdline -- common/autotest_common.sh@972 -- # wait 3456898 00:06:50.673 00:06:50.673 real 0m1.434s 00:06:50.673 user 0m1.768s 00:06:50.673 sys 0m0.445s 00:06:50.673 13:14:48 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.673 13:14:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.673 ************************************ 00:06:50.673 END TEST app_cmdline 00:06:50.673 ************************************ 00:06:50.933 13:14:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.933 13:14:48 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:50.933 13:14:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.933 13:14:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.933 13:14:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.933 ************************************ 00:06:50.933 START TEST version 00:06:50.933 ************************************ 00:06:50.933 13:14:48 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:50.933 * Looking for test storage... 
00:06:50.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:50.933 13:14:48 version -- app/version.sh@17 -- # get_header_version major 00:06:50.933 13:14:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.933 13:14:48 version -- app/version.sh@14 -- # cut -f2 00:06:50.933 13:14:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.933 13:14:48 version -- app/version.sh@17 -- # major=24 00:06:50.933 13:14:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:50.933 13:14:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.933 13:14:48 version -- app/version.sh@14 -- # cut -f2 00:06:50.933 13:14:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.933 13:14:48 version -- app/version.sh@18 -- # minor=9 00:06:50.933 13:14:48 version -- app/version.sh@19 -- # get_header_version patch 00:06:50.933 13:14:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.933 13:14:48 version -- app/version.sh@14 -- # cut -f2 00:06:50.933 13:14:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.933 13:14:48 version -- app/version.sh@19 -- # patch=0 00:06:50.933 13:14:48 version -- app/version.sh@20 -- # get_header_version suffix 00:06:50.933 13:14:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.933 13:14:48 version -- app/version.sh@14 -- # cut -f2 00:06:50.933 13:14:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.933 13:14:48 version -- app/version.sh@20 -- # suffix=-pre 00:06:50.933 13:14:48 version -- app/version.sh@22 -- # version=24.9 00:06:50.933 13:14:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:50.933 13:14:48 version -- app/version.sh@28 -- # version=24.9rc0 00:06:50.933 13:14:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:50.933 13:14:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:50.933 13:14:48 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:50.933 13:14:48 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:50.933 00:06:50.933 real 0m0.115s 00:06:50.933 user 0m0.060s 00:06:50.933 sys 0m0.077s 00:06:50.933 13:14:48 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.933 13:14:48 version -- common/autotest_common.sh@10 -- # set +x 00:06:50.933 ************************************ 00:06:50.933 END TEST version 00:06:50.933 ************************************ 00:06:50.933 13:14:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.933 13:14:48 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:50.933 13:14:48 -- spdk/autotest.sh@198 -- # uname -s 00:06:50.933 13:14:48 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:50.933 13:14:48 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:50.933 13:14:48 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 
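The version test traced above derives the SPDK version string directly from include/spdk/version.h and compares it with what the bundled Python package reports as spdk.__version__. A minimal standalone sketch of that check, assuming a local checkout whose path is supplied in SPDK_ROOT (the variable name and fallback path below are placeholders, not taken from this run):

    #!/usr/bin/env bash
    # Sketch only: mirrors the grep/cut/tr pipeline visible in the app/version.sh trace above.
    SPDK_ROOT=${SPDK_ROOT:-/path/to/spdk}   # placeholder; the CI job uses its Jenkins workspace checkout
    hdr=$SPDK_ROOT/include/spdk/version.h

    get_header_version() {
        # Field 2 of "#define SPDK_VERSION_<NAME> <value>", with surrounding quotes stripped
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)
    minor=$(get_header_version MINOR)
    patch=$(get_header_version PATCH)
    suffix=$(get_header_version SUFFIX)

    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    [[ $suffix == -pre ]] && version="${version}rc0"   # assumption: "-pre" maps to an rc0 tag, as seen in this run

    py_version=$(PYTHONPATH=$SPDK_ROOT/python python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] && echo "version check OK: $version"

On this run both sides resolve to 24.9rc0, so the comparison passes and the version test exits 0.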
00:06:50.933 13:14:48 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:50.933 13:14:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:50.933 13:14:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:50.933 13:14:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.933 13:14:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.933 13:14:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:50.933 13:14:48 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:50.933 13:14:48 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:50.933 13:14:48 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:50.933 13:14:48 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:50.933 13:14:48 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:50.934 13:14:48 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:50.934 13:14:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.934 13:14:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.934 13:14:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.934 ************************************ 00:06:50.934 START TEST nvmf_tcp 00:06:50.934 ************************************ 00:06:50.934 13:14:48 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:50.934 * Looking for test storage... 00:06:51.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.193 13:14:48 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.193 13:14:48 
nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.193 13:14:48 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.193 13:14:48 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.193 13:14:48 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.193 13:14:48 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.193 13:14:48 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:51.193 13:14:48 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:51.193 13:14:48 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.193 13:14:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:51.193 13:14:48 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:51.193 13:14:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.193 13:14:48 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.193 13:14:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.193 ************************************ 00:06:51.193 START TEST nvmf_example 00:06:51.193 ************************************ 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:51.193 * Looking for test storage... 00:06:51.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.193 13:14:48 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:51.194 13:14:48 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:51.194 13:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.098 13:14:50 
nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:53.098 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:53.098 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:53.098 
Found net devices under 0000:09:00.0: cvl_0_0 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:53.098 Found net devices under 0000:09:00.1: cvl_0_1 00:06:53.098 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.099 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:06:53.357 00:06:53.357 --- 10.0.0.2 ping statistics --- 00:06:53.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.357 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:06:53.357 00:06:53.357 --- 10.0.0.1 ping statistics --- 00:06:53.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.357 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3458801 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3458801 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3458801 ']' 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:53.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.357 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.357 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.615 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.615 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:53.615 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:53.615 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:53.615 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.615 13:14:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:53.615 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.615 13:14:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:53.615 13:14:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1'
00:06:53.615 EAL: No free 2048 kB hugepages reported on node 1
00:07:05.812 Initializing NVMe Controllers
00:07:05.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:05.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:05.812 Initialization complete. Launching workers.
00:07:05.812 ========================================================
00:07:05.812 Latency(us)
00:07:05.812 Device Information : IOPS MiB/s Average min max
00:07:05.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15260.10 59.61 4195.14 885.17 17035.46
00:07:05.812 ========================================================
00:07:05.812 Total : 15260.10 59.61 4195.14 885.17 17035.46
00:07:05.812
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:05.812 rmmod nvme_tcp
00:07:05.812 rmmod nvme_fabrics
00:07:05.812 rmmod nvme_keyring
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3458801 ']'
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3458801
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3458801 ']'
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3458801
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3458801
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3458801'
00:07:05.812 killing process with pid 3458801
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3458801
00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3458801
00:07:05.812 nvmf threads initialize successfully
00:07:05.812 bdev subsystem init successfully
00:07:05.812 created a nvmf target service
00:07:05.812 create targets's poll groups done
00:07:05.812 all subsystems of target started
00:07:05.812 nvmf target is running
00:07:05.812 all subsystems of target stopped
00:07:05.812 destroy targets's poll groups done
00:07:05.812 destroyed the nvmf target service
00:07:05.812 bdev 
subsystem finish successfully 00:07:05.812 nvmf threads destroy successfully 00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.812 13:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.420 13:15:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:06.420 13:15:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:06.420 13:15:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.420 13:15:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.420 00:07:06.420 real 0m15.148s 00:07:06.420 user 0m41.929s 00:07:06.420 sys 0m3.186s 00:07:06.420 13:15:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.420 13:15:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.420 ************************************ 00:07:06.420 END TEST nvmf_example 00:07:06.420 ************************************ 00:07:06.420 13:15:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:06.420 13:15:03 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:06.420 13:15:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:06.420 13:15:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.420 13:15:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.420 ************************************ 00:07:06.420 START TEST nvmf_filesystem 00:07:06.420 ************************************ 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:06.420 * Looking for test storage... 
00:07:06.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:06.420 13:15:03 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:06.420 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:06.421 #define SPDK_CONFIG_H 00:07:06.421 #define SPDK_CONFIG_APPS 1 00:07:06.421 #define SPDK_CONFIG_ARCH native 00:07:06.421 #undef SPDK_CONFIG_ASAN 00:07:06.421 #undef SPDK_CONFIG_AVAHI 00:07:06.421 #undef SPDK_CONFIG_CET 00:07:06.421 #define SPDK_CONFIG_COVERAGE 1 00:07:06.421 #define SPDK_CONFIG_CROSS_PREFIX 00:07:06.421 #undef SPDK_CONFIG_CRYPTO 00:07:06.421 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:06.421 #undef SPDK_CONFIG_CUSTOMOCF 00:07:06.421 #undef SPDK_CONFIG_DAOS 00:07:06.421 #define SPDK_CONFIG_DAOS_DIR 00:07:06.421 #define SPDK_CONFIG_DEBUG 1 00:07:06.421 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:06.421 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:06.421 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:06.421 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:06.421 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:06.421 #undef SPDK_CONFIG_DPDK_UADK 00:07:06.421 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:06.421 #define SPDK_CONFIG_EXAMPLES 1 00:07:06.421 #undef SPDK_CONFIG_FC 00:07:06.421 #define SPDK_CONFIG_FC_PATH 00:07:06.421 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:06.421 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:06.421 #undef SPDK_CONFIG_FUSE 00:07:06.421 #undef SPDK_CONFIG_FUZZER 00:07:06.421 #define SPDK_CONFIG_FUZZER_LIB 00:07:06.421 #undef SPDK_CONFIG_GOLANG 00:07:06.421 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:06.421 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:06.421 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:06.421 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:06.421 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:06.421 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:06.421 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:06.421 #define SPDK_CONFIG_IDXD 1 00:07:06.421 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:06.421 #undef SPDK_CONFIG_IPSEC_MB 00:07:06.421 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:06.421 #define SPDK_CONFIG_ISAL 1 00:07:06.421 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:06.421 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:06.421 #define 
SPDK_CONFIG_LIBDIR 00:07:06.421 #undef SPDK_CONFIG_LTO 00:07:06.421 #define SPDK_CONFIG_MAX_LCORES 128 00:07:06.421 #define SPDK_CONFIG_NVME_CUSE 1 00:07:06.421 #undef SPDK_CONFIG_OCF 00:07:06.421 #define SPDK_CONFIG_OCF_PATH 00:07:06.421 #define SPDK_CONFIG_OPENSSL_PATH 00:07:06.421 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:06.421 #define SPDK_CONFIG_PGO_DIR 00:07:06.421 #undef SPDK_CONFIG_PGO_USE 00:07:06.421 #define SPDK_CONFIG_PREFIX /usr/local 00:07:06.421 #undef SPDK_CONFIG_RAID5F 00:07:06.421 #undef SPDK_CONFIG_RBD 00:07:06.421 #define SPDK_CONFIG_RDMA 1 00:07:06.421 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:06.421 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:06.421 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:06.421 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:06.421 #define SPDK_CONFIG_SHARED 1 00:07:06.421 #undef SPDK_CONFIG_SMA 00:07:06.421 #define SPDK_CONFIG_TESTS 1 00:07:06.421 #undef SPDK_CONFIG_TSAN 00:07:06.421 #define SPDK_CONFIG_UBLK 1 00:07:06.421 #define SPDK_CONFIG_UBSAN 1 00:07:06.421 #undef SPDK_CONFIG_UNIT_TESTS 00:07:06.421 #undef SPDK_CONFIG_URING 00:07:06.421 #define SPDK_CONFIG_URING_PATH 00:07:06.421 #undef SPDK_CONFIG_URING_ZNS 00:07:06.421 #undef SPDK_CONFIG_USDT 00:07:06.421 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:06.421 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:06.421 #define SPDK_CONFIG_VFIO_USER 1 00:07:06.421 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:06.421 #define SPDK_CONFIG_VHOST 1 00:07:06.421 #define SPDK_CONFIG_VIRTIO 1 00:07:06.421 #undef SPDK_CONFIG_VTUNE 00:07:06.421 #define SPDK_CONFIG_VTUNE_DIR 00:07:06.421 #define SPDK_CONFIG_WERROR 1 00:07:06.421 #define SPDK_CONFIG_WPDK_DIR 00:07:06.421 #undef SPDK_CONFIG_XNVME 00:07:06.421 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
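The applications.sh@22-@23 check a few records above decides whether debug app binaries may be used by glob-matching the entire generated config.h against "#define SPDK_CONFIG_DEBUG". A minimal re-creation of that idiom (the path is the one from this run; the variable name is illustrative, not the exact script):
config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
# Read the whole header and glob-match it in-shell; no grep process is spawned.
if [[ -e "$config_h" && "$(< "$config_h")" == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    debug_build=1
else
    debug_build=0
fi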
00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:06.421 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:06.422 
13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:06.422 
13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:06.422 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
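The alternating ": <value>" / "export SPDK_TEST_*" records above are the flag-defaulting idiom autotest_common.sh uses for its feature switches: keep a value supplied by the environment, otherwise fall back to the per-job default. Roughly, with values taken from this trace (the script text is not reproduced verbatim):
# Keep an externally provided value, otherwise set the default, then export.
: "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS
: "${SPDK_RUN_UBSAN:=1}";             export SPDK_RUN_UBSAN
# A caller can still override any switch, e.g. SPDK_RUN_UBSAN=0 before invoking the test.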
00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3460615 ]] 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3460615 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.WWNqH9 00:07:06.423 
13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.WWNqH9/tests/target /tmp/spdk.WWNqH9 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=49733943296 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994737664 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12260794368 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941732864 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997368832 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:07:06.423 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8765440 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996320256 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997368832 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1048576 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:06.424 * Looking for test storage... 
00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=49733943296 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=14475386880 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:06.424 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
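The set_test_storage walk above (autotest_common.sh@328-@389) parses df output into per-mount arrays and exports SPDK_TEST_STORAGE as the first candidate directory whose filesystem has at least the requested ~2 GiB free. A condensed sketch of that selection under the same candidate order (the df invocation is simplified here; the real helper also special-cases tmpfs/ramfs and grows the budget when it lands on /):
requested_size=$((2 * 1024 * 1024 * 1024))                     # 2 GiB floor
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
storage_fallback=$(mktemp -udt spdk.XXXXXX)                    # e.g. /tmp/spdk.WWNqH9 in this run
mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}"
for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    avail=$(df -B1 --output=avail "$target_dir" | tail -n 1)   # free bytes on the backing filesystem
    if (( avail >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        break
    fi
done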
00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:06.425 13:15:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
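gather_supported_nvmf_pci_devs above builds the e810/x722/mlx arrays by looking up known vendor:device IDs in a PCI cache. The grouping idea, sketched as a plain sysfs walk rather than the script's pci_bus_cache (only a subset of the IDs from the trace is shown):
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(< "$dev/vendor") device=$(< "$dev/device")
    case "$vendor:$device" in
        "$intel:0x1592" | "$intel:0x159b") e810+=("${dev##*/}") ;;   # E810 (ice)
        "$intel:0x37d2")                   x722+=("${dev##*/}") ;;   # X722 (i40e)
        "$mellanox":*)                     mlx+=("${dev##*/}")  ;;   # ConnectX family
    esac
done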
00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:08.956 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:08.956 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.956 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:08.957 Found net devices under 0000:09:00.0: cvl_0_0 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:08.957 Found net devices under 0000:09:00.1: cvl_0_1 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:08.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:07:08.957 00:07:08.957 --- 10.0.0.2 ping statistics --- 00:07:08.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.957 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:07:08.957 00:07:08.957 --- 10.0.0.1 ping statistics --- 00:07:08.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.957 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.957 13:15:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.957 ************************************ 00:07:08.957 START TEST nvmf_filesystem_no_in_capsule 00:07:08.957 ************************************ 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3462242 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3462242 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
3462242 ']' 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.957 [2024-07-12 13:15:06.059056] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:07:08.957 [2024-07-12 13:15:06.059143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.957 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.957 [2024-07-12 13:15:06.097987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:08.957 [2024-07-12 13:15:06.124142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.957 [2024-07-12 13:15:06.216931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.957 [2024-07-12 13:15:06.216979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.957 [2024-07-12 13:15:06.217007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.957 [2024-07-12 13:15:06.217018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.957 [2024-07-12 13:15:06.217028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
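For readers following the trace: the lines above launch the SPDK target inside the cvl_0_0_ns_spdk network namespace and then block until its JSON-RPC socket answers. A minimal shell sketch of that start-and-wait pattern follows; the namespace name, core mask and socket path come straight from the trace, while the relative paths and the rpc.py polling loop are assumptions standing in for the suite's nvmfappstart/waitforlisten helpers.

# Sketch only -- not a copy of the test suite's helpers.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the target answers (retry count is an assumption).
for _ in $(seq 1 100); do
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done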
00:07:08.957 [2024-07-12 13:15:06.217110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.957 [2024-07-12 13:15:06.217176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.957 [2024-07-12 13:15:06.217243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.957 [2024-07-12 13:15:06.217246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.957 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.958 [2024-07-12 13:15:06.374086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.958 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.215 Malloc1 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.215 [2024-07-12 13:15:06.569690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.215 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:09.215 { 00:07:09.215 "name": "Malloc1", 00:07:09.215 "aliases": [ 00:07:09.215 "28fd9bbc-cef9-4e84-9b4d-8fe0cf4e8326" 00:07:09.215 ], 00:07:09.215 "product_name": "Malloc disk", 00:07:09.215 "block_size": 512, 00:07:09.215 "num_blocks": 1048576, 00:07:09.215 "uuid": "28fd9bbc-cef9-4e84-9b4d-8fe0cf4e8326", 00:07:09.215 "assigned_rate_limits": { 00:07:09.215 "rw_ios_per_sec": 0, 00:07:09.215 "rw_mbytes_per_sec": 0, 00:07:09.215 "r_mbytes_per_sec": 0, 00:07:09.215 "w_mbytes_per_sec": 0 00:07:09.215 }, 00:07:09.215 "claimed": true, 00:07:09.215 "claim_type": "exclusive_write", 00:07:09.215 "zoned": false, 00:07:09.215 "supported_io_types": { 00:07:09.215 "read": true, 00:07:09.215 "write": true, 00:07:09.215 "unmap": true, 00:07:09.215 "flush": true, 00:07:09.215 "reset": true, 00:07:09.215 "nvme_admin": false, 00:07:09.215 "nvme_io": false, 00:07:09.215 "nvme_io_md": false, 00:07:09.215 "write_zeroes": true, 00:07:09.215 "zcopy": true, 00:07:09.215 "get_zone_info": false, 00:07:09.215 "zone_management": false, 00:07:09.215 "zone_append": false, 00:07:09.215 "compare": false, 00:07:09.215 "compare_and_write": false, 00:07:09.215 "abort": true, 00:07:09.215 "seek_hole": false, 00:07:09.216 "seek_data": false, 00:07:09.216 "copy": true, 00:07:09.216 "nvme_iov_md": false 00:07:09.216 }, 00:07:09.216 "memory_domains": [ 00:07:09.216 { 
00:07:09.216 "dma_device_id": "system", 00:07:09.216 "dma_device_type": 1 00:07:09.216 }, 00:07:09.216 { 00:07:09.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.216 "dma_device_type": 2 00:07:09.216 } 00:07:09.216 ], 00:07:09.216 "driver_specific": {} 00:07:09.216 } 00:07:09.216 ]' 00:07:09.216 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:09.216 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:09.216 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:09.216 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:09.216 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:09.216 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:09.216 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:09.216 13:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.146 13:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.146 13:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:10.146 13:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.146 13:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:10.146 13:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:12.044 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:12.302 13:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:13.235 13:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:14.168 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:14.168 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:14.168 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:14.168 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.168 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.169 ************************************ 00:07:14.169 START TEST filesystem_ext4 00:07:14.169 ************************************ 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:14.169 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:14.169 13:15:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:14.169 mke2fs 1.46.5 (30-Dec-2021) 00:07:14.169 Discarding device blocks: 0/522240 done 00:07:14.169 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:14.169 Filesystem UUID: d50f1b99-4346-4858-a44d-ea8ad7f10762 00:07:14.169 Superblock backups stored on blocks: 00:07:14.169 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:14.169 00:07:14.169 Allocating group tables: 0/64 done 00:07:14.169 Writing inode tables: 0/64 done 00:07:14.427 Creating journal (8192 blocks): done 00:07:14.427 Writing superblocks and filesystem accounting information: 0/64 done 00:07:14.427 00:07:14.427 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:14.427 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:14.685 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:14.685 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:14.685 13:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3462242 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:14.685 00:07:14.685 real 0m0.537s 00:07:14.685 user 0m0.016s 00:07:14.685 sys 0m0.059s 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:14.685 ************************************ 00:07:14.685 END TEST filesystem_ext4 00:07:14.685 ************************************ 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:14.685 13:15:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.685 ************************************ 00:07:14.685 START TEST filesystem_btrfs 00:07:14.685 ************************************ 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:14.685 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:14.686 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:14.944 btrfs-progs v6.6.2 00:07:14.944 See https://btrfs.readthedocs.io for more information. 00:07:14.944 00:07:14.944 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:14.944 NOTE: several default settings have changed in version 5.15, please make sure 00:07:14.944 this does not affect your deployments: 00:07:14.944 - DUP for metadata (-m dup) 00:07:14.944 - enabled no-holes (-O no-holes) 00:07:14.944 - enabled free-space-tree (-R free-space-tree) 00:07:14.944 00:07:14.944 Label: (null) 00:07:14.944 UUID: 619bb346-5217-4b7e-b33d-442e79a61ee4 00:07:14.944 Node size: 16384 00:07:14.944 Sector size: 4096 00:07:14.944 Filesystem size: 510.00MiB 00:07:14.944 Block group profiles: 00:07:14.944 Data: single 8.00MiB 00:07:14.944 Metadata: DUP 32.00MiB 00:07:14.944 System: DUP 8.00MiB 00:07:14.944 SSD detected: yes 00:07:14.944 Zoned device: no 00:07:14.944 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:14.944 Runtime features: free-space-tree 00:07:14.944 Checksum: crc32c 00:07:14.944 Number of devices: 1 00:07:14.944 Devices: 00:07:14.944 ID SIZE PATH 00:07:14.944 1 510.00MiB /dev/nvme0n1p1 00:07:14.944 00:07:14.944 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:14.944 13:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:15.876 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:15.876 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:15.876 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:15.876 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:15.876 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:15.876 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3462242 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.135 00:07:16.135 real 0m1.292s 00:07:16.135 user 0m0.020s 00:07:16.135 sys 0m0.118s 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 ************************************ 00:07:16.135 END TEST filesystem_btrfs 00:07:16.135 ************************************ 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.135 ************************************ 00:07:16.135 START TEST filesystem_xfs 00:07:16.135 ************************************ 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:16.135 13:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:16.135 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:16.135 = sectsz=512 attr=2, projid32bit=1 00:07:16.135 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:16.135 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:16.135 data = bsize=4096 blocks=130560, imaxpct=25 00:07:16.135 = sunit=0 swidth=0 blks 00:07:16.135 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:16.135 log =internal log bsize=4096 blocks=16384, version=2 00:07:16.135 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:16.135 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:17.514 Discarding blocks...Done. 
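The ext4, btrfs and xfs subtests above all exercise the same target/filesystem.sh path: provision the target over RPC, attach the host with nvme-cli, partition the new namespace, then create, mount and dirty each filesystem while checking that the target process survives. Condensed into plain shell below; the rpc() wrapper and the loop structure are assumptions (the suite's rpc_cmd and run_test helpers do the equivalent), everything else mirrors commands visible in the trace.

# Hedged summary of the traced flow, not the literal test script.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # assumed stand-in for rpc_cmd

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0         # -c 4096 in the in_capsule variant
rpc bdev_malloc_create 512 512 -b Malloc1                # 512 MiB bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkdir -p /mnt/device

for fstype in ext4 btrfs xfs; do
  case "$fstype" in ext4) force=-F ;; *) force=-f ;; esac
  mkfs."$fstype" "$force" /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                                     # target must still be running
done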
00:07:17.514 13:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:17.514 13:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3462242 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.413 00:07:19.413 real 0m3.335s 00:07:19.413 user 0m0.024s 00:07:19.413 sys 0m0.056s 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:19.413 ************************************ 00:07:19.413 END TEST filesystem_xfs 00:07:19.413 ************************************ 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:19.413 13:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:19.671 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:19.671 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.929 13:15:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3462242 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3462242 ']' 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3462242 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3462242 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3462242' 00:07:19.929 killing process with pid 3462242 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3462242 00:07:19.929 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3462242 00:07:20.495 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:20.495 00:07:20.495 real 0m11.699s 00:07:20.495 user 0m44.878s 00:07:20.495 sys 0m1.786s 00:07:20.495 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.495 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.495 ************************************ 00:07:20.495 END TEST nvmf_filesystem_no_in_capsule 00:07:20.495 ************************************ 00:07:20.495 13:15:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:20.495 13:15:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:20.495 13:15:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:20.495 13:15:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.495 13:15:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.496 ************************************ 00:07:20.496 START TEST nvmf_filesystem_in_capsule 00:07:20.496 ************************************ 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3464314 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3464314 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3464314 ']' 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.496 13:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.496 [2024-07-12 13:15:17.808398] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:07:20.496 [2024-07-12 13:15:17.808488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.496 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.496 [2024-07-12 13:15:17.845130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.496 [2024-07-12 13:15:17.871043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.496 [2024-07-12 13:15:17.949992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:20.496 [2024-07-12 13:15:17.950047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.496 [2024-07-12 13:15:17.950060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.496 [2024-07-12 13:15:17.950086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.496 [2024-07-12 13:15:17.950096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.496 [2024-07-12 13:15:17.950182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.496 [2024-07-12 13:15:17.950289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.496 [2024-07-12 13:15:17.950379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.496 [2024-07-12 13:15:17.950382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.754 [2024-07-12 13:15:18.110321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.754 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.012 Malloc1 00:07:21.012 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.012 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:21.012 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.012 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.012 13:15:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.012 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.012 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.013 [2024-07-12 13:15:18.297813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:21.013 { 00:07:21.013 "name": "Malloc1", 00:07:21.013 "aliases": [ 00:07:21.013 "acbb08eb-85e2-4e7c-8c76-6b9f74d908f0" 00:07:21.013 ], 00:07:21.013 "product_name": "Malloc disk", 00:07:21.013 "block_size": 512, 00:07:21.013 "num_blocks": 1048576, 00:07:21.013 "uuid": "acbb08eb-85e2-4e7c-8c76-6b9f74d908f0", 00:07:21.013 "assigned_rate_limits": { 00:07:21.013 "rw_ios_per_sec": 0, 00:07:21.013 "rw_mbytes_per_sec": 0, 00:07:21.013 "r_mbytes_per_sec": 0, 00:07:21.013 "w_mbytes_per_sec": 0 00:07:21.013 }, 00:07:21.013 "claimed": true, 00:07:21.013 "claim_type": "exclusive_write", 00:07:21.013 "zoned": false, 00:07:21.013 "supported_io_types": { 00:07:21.013 "read": true, 00:07:21.013 "write": true, 00:07:21.013 "unmap": true, 00:07:21.013 "flush": true, 00:07:21.013 "reset": true, 00:07:21.013 "nvme_admin": false, 00:07:21.013 "nvme_io": false, 00:07:21.013 "nvme_io_md": false, 00:07:21.013 "write_zeroes": true, 
00:07:21.013 "zcopy": true, 00:07:21.013 "get_zone_info": false, 00:07:21.013 "zone_management": false, 00:07:21.013 "zone_append": false, 00:07:21.013 "compare": false, 00:07:21.013 "compare_and_write": false, 00:07:21.013 "abort": true, 00:07:21.013 "seek_hole": false, 00:07:21.013 "seek_data": false, 00:07:21.013 "copy": true, 00:07:21.013 "nvme_iov_md": false 00:07:21.013 }, 00:07:21.013 "memory_domains": [ 00:07:21.013 { 00:07:21.013 "dma_device_id": "system", 00:07:21.013 "dma_device_type": 1 00:07:21.013 }, 00:07:21.013 { 00:07:21.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.013 "dma_device_type": 2 00:07:21.013 } 00:07:21.013 ], 00:07:21.013 "driver_specific": {} 00:07:21.013 } 00:07:21.013 ]' 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:21.013 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.578 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.578 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:21.578 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.578 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:21.578 13:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:24.137 13:15:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:24.137 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:24.701 13:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.632 ************************************ 00:07:25.632 START TEST filesystem_in_capsule_ext4 00:07:25.632 ************************************ 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:25.632 13:15:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:25.632 13:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:25.632 mke2fs 1.46.5 (30-Dec-2021) 00:07:25.632 Discarding device blocks: 0/522240 done 00:07:25.889 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:25.889 Filesystem UUID: 57de30ea-6565-43a4-a66d-b0b89d9eb586 00:07:25.889 Superblock backups stored on blocks: 00:07:25.889 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:25.889 00:07:25.889 Allocating group tables: 0/64 done 00:07:25.889 Writing inode tables: 0/64 done 00:07:28.414 Creating journal (8192 blocks): done 00:07:28.414 Writing superblocks and filesystem accounting information: 0/64 done 00:07:28.414 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3464314 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.414 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.673 00:07:28.673 real 0m2.911s 00:07:28.673 user 0m0.019s 00:07:28.673 sys 0m0.053s 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:28.673 
************************************ 00:07:28.673 END TEST filesystem_in_capsule_ext4 00:07:28.673 ************************************ 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.673 ************************************ 00:07:28.673 START TEST filesystem_in_capsule_btrfs 00:07:28.673 ************************************ 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:28.673 13:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:28.673 btrfs-progs v6.6.2 00:07:28.673 See https://btrfs.readthedocs.io for more information. 00:07:28.673 00:07:28.673 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:28.673 NOTE: several default settings have changed in version 5.15, please make sure 00:07:28.673 this does not affect your deployments: 00:07:28.673 - DUP for metadata (-m dup) 00:07:28.673 - enabled no-holes (-O no-holes) 00:07:28.673 - enabled free-space-tree (-R free-space-tree) 00:07:28.673 00:07:28.673 Label: (null) 00:07:28.673 UUID: 1e897d0c-a45f-4fd6-97d0-fe6298995a4d 00:07:28.673 Node size: 16384 00:07:28.673 Sector size: 4096 00:07:28.673 Filesystem size: 510.00MiB 00:07:28.673 Block group profiles: 00:07:28.673 Data: single 8.00MiB 00:07:28.673 Metadata: DUP 32.00MiB 00:07:28.673 System: DUP 8.00MiB 00:07:28.673 SSD detected: yes 00:07:28.673 Zoned device: no 00:07:28.673 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:28.673 Runtime features: free-space-tree 00:07:28.673 Checksum: crc32c 00:07:28.673 Number of devices: 1 00:07:28.673 Devices: 00:07:28.673 ID SIZE PATH 00:07:28.673 1 510.00MiB /dev/nvme0n1p1 00:07:28.673 00:07:28.673 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:28.673 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3464314 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.607 00:07:29.607 real 0m0.958s 00:07:29.607 user 0m0.017s 00:07:29.607 sys 0m0.112s 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:29.607 ************************************ 00:07:29.607 END TEST filesystem_in_capsule_btrfs 00:07:29.607 ************************************ 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.607 ************************************ 00:07:29.607 START TEST filesystem_in_capsule_xfs 00:07:29.607 ************************************ 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:29.607 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:29.608 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:29.608 13:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:29.608 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:29.608 = sectsz=512 attr=2, projid32bit=1 00:07:29.608 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:29.608 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:29.608 data = bsize=4096 blocks=130560, imaxpct=25 00:07:29.608 = sunit=0 swidth=0 blks 00:07:29.608 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:29.608 log =internal log bsize=4096 blocks=16384, version=2 00:07:29.608 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:29.608 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.979 Discarding blocks...Done. 
00:07:30.979 13:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:30.979 13:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3464314 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.503 00:07:33.503 real 0m3.607s 00:07:33.503 user 0m0.010s 00:07:33.503 sys 0m0.065s 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:33.503 ************************************ 00:07:33.503 END TEST filesystem_in_capsule_xfs 00:07:33.503 ************************************ 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:33.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:33.503 13:15:30 
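The three in-capsule filesystem tests above (ext4, btrfs, xfs) all exercise the same flow from target/filesystem.sh: build a filesystem on the first partition of the exported namespace, mount it, write and remove a small file, unmount, and confirm the target process and block devices are still present. A condensed stand-alone sketch of that flow, with the device path, mount point, and target PID written as placeholders rather than values taken from this run:

    dev=/dev/nvme0n1p1        # partition on the NVMe/TCP-attached namespace
    mnt=/mnt/device
    tgt_pid=12345             # PID of the running nvmf_tgt (3464314 in this log)

    mkfs.ext4 -F "$dev"       # the btrfs/xfs variants use mkfs.btrfs -f / mkfs.xfs -f
    mount "$dev" "$mnt"
    touch "$mnt/aaa" && sync  # push a small write through the NVMe/TCP path
    rm "$mnt/aaa" && sync
    umount "$mnt"

    kill -0 "$tgt_pid"                          # target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still visible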
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.503 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3464314 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3464314 ']' 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3464314 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3464314 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3464314' 00:07:33.504 killing process with pid 3464314 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3464314 00:07:33.504 13:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3464314 00:07:33.763 13:15:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:33.763 00:07:33.763 real 0m13.452s 00:07:33.763 user 0m51.753s 00:07:33.763 sys 0m1.940s 00:07:33.763 13:15:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.763 13:15:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.763 ************************************ 00:07:33.763 END TEST nvmf_filesystem_in_capsule 00:07:33.763 ************************************ 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- 
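The teardown traced here unwinds the in-capsule test: remove the test partition, disconnect the initiator, delete the subsystem over RPC, and stop the target. A condensed sketch of the same sequence (the rpc.py invocation stands in for the harness's rpc_cmd wrapper, and the serial is the SPDKISFASTANDAWESOME value used by this controller):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the test partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the NVMe/TCP controller

    # simplified wait until the controller's serial disappears from lsblk
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                 # nvmf_tgt PID, 3464314 in this run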
nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.021 rmmod nvme_tcp 00:07:34.021 rmmod nvme_fabrics 00:07:34.021 rmmod nvme_keyring 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.021 13:15:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.928 13:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.928 00:07:35.928 real 0m29.690s 00:07:35.928 user 1m37.527s 00:07:35.928 sys 0m5.381s 00:07:35.928 13:15:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.928 13:15:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.928 ************************************ 00:07:35.928 END TEST nvmf_filesystem 00:07:35.928 ************************************ 00:07:35.928 13:15:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:35.928 13:15:33 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:35.928 13:15:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.928 13:15:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.928 13:15:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.928 ************************************ 00:07:35.928 START TEST nvmf_target_discovery 00:07:35.928 ************************************ 00:07:35.928 13:15:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:36.186 * Looking for test storage... 
00:07:36.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
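nvmf/common.sh, sourced at the start of this test, generates the initiator identity once: NVME_HOSTNQN via nvme gen-hostnqn, with NVME_HOSTID being the UUID portion of that NQN. These are the values the later nvme discover call in this log passes as --hostnqn/--hostid. A small illustrative sketch of that relationship (the parameter expansion is just one way to pull the UUID out, and 10.0.0.2:4420 is the target address and port configured further down; the connect line is illustrative only):

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}     # the bare UUID

    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
    nvme connect  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420 \
                  -n nqn.2016-06.io.spdk:cnode1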
prepare_net_devs 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.186 13:15:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.088 13:15:35 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.088 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:38.347 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:38.347 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:38.347 Found net devices under 0000:09:00.0: cvl_0_0 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:38.347 Found net devices under 0000:09:00.1: cvl_0_1 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.347 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
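The nvmf_tcp_init sequence in this part of the trace splits the two detected E810 ports into an initiator/target pair: the target-side interface is moved into its own network namespace so the SPDK target and the kernel initiator talk over a real NIC-to-NIC TCP path rather than loopback. The equivalent steps, condensed, using the interface and namespace names printed in this log:

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                 # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator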
ip link set cvl_0_1 up 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:07:38.348 00:07:38.348 --- 10.0.0.2 ping statistics --- 00:07:38.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.348 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:07:38.348 00:07:38.348 --- 10.0.0.1 ping statistics --- 00:07:38.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.348 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3468050 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3468050 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3468050 ']' 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:38.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.348 13:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.348 [2024-07-12 13:15:35.786194] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:07:38.348 [2024-07-12 13:15:35.786287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.606 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.606 [2024-07-12 13:15:35.826999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:38.606 [2024-07-12 13:15:35.854025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.606 [2024-07-12 13:15:35.941578] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.606 [2024-07-12 13:15:35.941634] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.606 [2024-07-12 13:15:35.941661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.606 [2024-07-12 13:15:35.941673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.606 [2024-07-12 13:15:35.941683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.606 [2024-07-12 13:15:35.941743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.606 [2024-07-12 13:15:35.941801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.606 [2024-07-12 13:15:35.941868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.606 [2024-07-12 13:15:35.941870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.606 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.606 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:38.606 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.606 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.606 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 [2024-07-12 13:15:36.095109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:38.864 13:15:36 
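With the namespace wired up, the discovery test launches nvmf_tgt inside it and creates the TCP transport over the RPC socket, which is the point reached in the trace above. A minimal reproduction of those two steps (relative paths stand in for the full workspace paths, and the until-loop is a simplified version of the harness's waitforlisten):

    # Run the target inside the target namespace: -i 0 shared-memory id,
    # -e 0xFFFF tracepoint group mask, -m 0xF cores 0-3.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait for the RPC socket to come up, then create the TCP transport.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags as passed by the harness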
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 Null1 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 [2024-07-12 13:15:36.135430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 Null2 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 Null3 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 Null4 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.864 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:07:39.122 00:07:39.122 Discovery Log Number of Records 6, Generation counter 6 00:07:39.122 =====Discovery Log Entry 0====== 00:07:39.122 trtype: tcp 00:07:39.122 adrfam: ipv4 00:07:39.122 subtype: current discovery subsystem 00:07:39.122 treq: not required 00:07:39.122 portid: 0 00:07:39.122 trsvcid: 4420 00:07:39.122 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:39.122 traddr: 10.0.0.2 00:07:39.122 eflags: explicit discovery connections, duplicate discovery information 00:07:39.122 sectype: none 00:07:39.122 =====Discovery Log Entry 1====== 00:07:39.122 trtype: tcp 00:07:39.122 adrfam: ipv4 00:07:39.122 subtype: nvme subsystem 00:07:39.122 treq: not required 00:07:39.122 portid: 0 00:07:39.122 trsvcid: 4420 00:07:39.122 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:39.122 traddr: 10.0.0.2 00:07:39.122 eflags: none 00:07:39.122 sectype: none 00:07:39.122 =====Discovery Log Entry 2====== 00:07:39.122 trtype: tcp 00:07:39.122 adrfam: ipv4 00:07:39.122 subtype: nvme subsystem 00:07:39.122 treq: not required 00:07:39.122 portid: 0 00:07:39.122 trsvcid: 4420 00:07:39.122 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:39.122 traddr: 10.0.0.2 00:07:39.122 eflags: none 00:07:39.122 sectype: none 00:07:39.122 =====Discovery Log Entry 3====== 00:07:39.122 trtype: tcp 00:07:39.122 adrfam: ipv4 00:07:39.122 subtype: nvme subsystem 00:07:39.122 treq: not required 00:07:39.122 portid: 0 00:07:39.122 trsvcid: 4420 00:07:39.122 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:39.122 traddr: 10.0.0.2 
00:07:39.122 eflags: none 00:07:39.122 sectype: none 00:07:39.122 =====Discovery Log Entry 4====== 00:07:39.122 trtype: tcp 00:07:39.122 adrfam: ipv4 00:07:39.122 subtype: nvme subsystem 00:07:39.122 treq: not required 00:07:39.122 portid: 0 00:07:39.122 trsvcid: 4420 00:07:39.122 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:39.122 traddr: 10.0.0.2 00:07:39.122 eflags: none 00:07:39.122 sectype: none 00:07:39.122 =====Discovery Log Entry 5====== 00:07:39.122 trtype: tcp 00:07:39.122 adrfam: ipv4 00:07:39.122 subtype: discovery subsystem referral 00:07:39.122 treq: not required 00:07:39.122 portid: 0 00:07:39.122 trsvcid: 4430 00:07:39.122 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:39.122 traddr: 10.0.0.2 00:07:39.122 eflags: none 00:07:39.122 sectype: none 00:07:39.122 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:39.122 Perform nvmf subsystem discovery via RPC 00:07:39.122 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:39.122 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.122 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.122 [ 00:07:39.122 { 00:07:39.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:39.123 "subtype": "Discovery", 00:07:39.123 "listen_addresses": [ 00:07:39.123 { 00:07:39.123 "trtype": "TCP", 00:07:39.123 "adrfam": "IPv4", 00:07:39.123 "traddr": "10.0.0.2", 00:07:39.123 "trsvcid": "4420" 00:07:39.123 } 00:07:39.123 ], 00:07:39.123 "allow_any_host": true, 00:07:39.123 "hosts": [] 00:07:39.123 }, 00:07:39.123 { 00:07:39.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.123 "subtype": "NVMe", 00:07:39.123 "listen_addresses": [ 00:07:39.123 { 00:07:39.123 "trtype": "TCP", 00:07:39.123 "adrfam": "IPv4", 00:07:39.123 "traddr": "10.0.0.2", 00:07:39.123 "trsvcid": "4420" 00:07:39.123 } 00:07:39.123 ], 00:07:39.123 "allow_any_host": true, 00:07:39.123 "hosts": [], 00:07:39.123 "serial_number": "SPDK00000000000001", 00:07:39.123 "model_number": "SPDK bdev Controller", 00:07:39.123 "max_namespaces": 32, 00:07:39.123 "min_cntlid": 1, 00:07:39.123 "max_cntlid": 65519, 00:07:39.123 "namespaces": [ 00:07:39.123 { 00:07:39.123 "nsid": 1, 00:07:39.123 "bdev_name": "Null1", 00:07:39.123 "name": "Null1", 00:07:39.123 "nguid": "8EFAFBA09ECA46CBA343E987DAC5CC38", 00:07:39.123 "uuid": "8efafba0-9eca-46cb-a343-e987dac5cc38" 00:07:39.123 } 00:07:39.123 ] 00:07:39.123 }, 00:07:39.123 { 00:07:39.123 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:39.123 "subtype": "NVMe", 00:07:39.123 "listen_addresses": [ 00:07:39.123 { 00:07:39.123 "trtype": "TCP", 00:07:39.123 "adrfam": "IPv4", 00:07:39.123 "traddr": "10.0.0.2", 00:07:39.123 "trsvcid": "4420" 00:07:39.123 } 00:07:39.123 ], 00:07:39.123 "allow_any_host": true, 00:07:39.123 "hosts": [], 00:07:39.123 "serial_number": "SPDK00000000000002", 00:07:39.123 "model_number": "SPDK bdev Controller", 00:07:39.123 "max_namespaces": 32, 00:07:39.123 "min_cntlid": 1, 00:07:39.123 "max_cntlid": 65519, 00:07:39.123 "namespaces": [ 00:07:39.123 { 00:07:39.123 "nsid": 1, 00:07:39.123 "bdev_name": "Null2", 00:07:39.123 "name": "Null2", 00:07:39.123 "nguid": "D6E0CFD501E1445A806F1BBFE87FE481", 00:07:39.123 "uuid": "d6e0cfd5-01e1-445a-806f-1bbfe87fe481" 00:07:39.123 } 00:07:39.123 ] 00:07:39.123 }, 00:07:39.123 { 00:07:39.123 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:39.123 "subtype": "NVMe", 00:07:39.123 "listen_addresses": [ 
00:07:39.123 { 00:07:39.123 "trtype": "TCP", 00:07:39.123 "adrfam": "IPv4", 00:07:39.123 "traddr": "10.0.0.2", 00:07:39.123 "trsvcid": "4420" 00:07:39.123 } 00:07:39.123 ], 00:07:39.123 "allow_any_host": true, 00:07:39.123 "hosts": [], 00:07:39.123 "serial_number": "SPDK00000000000003", 00:07:39.123 "model_number": "SPDK bdev Controller", 00:07:39.123 "max_namespaces": 32, 00:07:39.123 "min_cntlid": 1, 00:07:39.123 "max_cntlid": 65519, 00:07:39.123 "namespaces": [ 00:07:39.123 { 00:07:39.123 "nsid": 1, 00:07:39.123 "bdev_name": "Null3", 00:07:39.123 "name": "Null3", 00:07:39.123 "nguid": "28D53BE4CE774234A2A1624B235D1FB7", 00:07:39.123 "uuid": "28d53be4-ce77-4234-a2a1-624b235d1fb7" 00:07:39.123 } 00:07:39.123 ] 00:07:39.123 }, 00:07:39.123 { 00:07:39.123 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:39.123 "subtype": "NVMe", 00:07:39.123 "listen_addresses": [ 00:07:39.123 { 00:07:39.123 "trtype": "TCP", 00:07:39.123 "adrfam": "IPv4", 00:07:39.123 "traddr": "10.0.0.2", 00:07:39.123 "trsvcid": "4420" 00:07:39.123 } 00:07:39.123 ], 00:07:39.123 "allow_any_host": true, 00:07:39.123 "hosts": [], 00:07:39.123 "serial_number": "SPDK00000000000004", 00:07:39.123 "model_number": "SPDK bdev Controller", 00:07:39.123 "max_namespaces": 32, 00:07:39.123 "min_cntlid": 1, 00:07:39.123 "max_cntlid": 65519, 00:07:39.123 "namespaces": [ 00:07:39.123 { 00:07:39.123 "nsid": 1, 00:07:39.123 "bdev_name": "Null4", 00:07:39.123 "name": "Null4", 00:07:39.123 "nguid": "20C98C7BF3EF4E27B5C414A8F8B54808", 00:07:39.123 "uuid": "20c98c7b-f3ef-4e27-b5c4-14a8f8b54808" 00:07:39.123 } 00:07:39.123 ] 00:07:39.123 } 00:07:39.123 ] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.123 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.381 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:39.382 
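The cleanup traced above is the whole teardown of the discovery test: each cnodeN subsystem and its NullN bdev is deleted, the 4430 referral is removed, and bdev_get_bdevs is expected to come back empty. A minimal manual equivalent, assuming SPDK's scripts/rpc.py is used in place of the rpc_cmd wrapper seen in the trace:

  rpc=./scripts/rpc.py                      # path inside an SPDK checkout (assumed)
  for i in 1 2 3 4; do
      $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # drop each subsystem
      $rpc bdev_null_delete "Null$i"                             # and its backing null bdev
  done
  $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  $rpc bdev_get_bdevs | jq -r '.[].name'    # the test expects no output here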
13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.382 rmmod nvme_tcp 00:07:39.382 rmmod nvme_fabrics 00:07:39.382 rmmod nvme_keyring 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3468050 ']' 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3468050 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3468050 ']' 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3468050 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3468050 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3468050' 00:07:39.382 killing process with pid 3468050 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3468050 00:07:39.382 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3468050 00:07:39.648 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.648 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.648 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.648 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.648 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.648 13:15:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.648 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.648 13:15:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.588 13:15:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.588 00:07:41.588 real 0m5.558s 00:07:41.588 user 0m4.663s 00:07:41.588 sys 0m1.892s 00:07:41.588 13:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.588 13:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:07:41.588 ************************************ 00:07:41.588 END TEST nvmf_target_discovery 00:07:41.588 ************************************ 00:07:41.588 13:15:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:41.588 13:15:38 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:41.588 13:15:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:41.588 13:15:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.588 13:15:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.588 ************************************ 00:07:41.588 START TEST nvmf_referrals 00:07:41.588 ************************************ 00:07:41.588 13:15:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:41.588 * Looking for test storage... 00:07:41.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.588 13:15:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.588 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:41.588 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.589 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.589 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.589 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.589 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.589 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.589 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.589 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.589 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # 
NVMF_REFERRAL_IP_3=127.0.0.4 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.846 13:15:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
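The e810/x722/mlx arrays above are only lists of PCI vendor/device IDs that the harness treats as NVMe-oF-capable NICs; the loop that follows walks the detected PCI devices looking for them. A rough manual check on the same box (lspci invocation assumed, not part of the test) would be:

  lspci -nn -d 8086:159b    # Intel E810 ports, the 0x159b entries matched below
  lspci -nn -d 15b3:        # any Mellanox ConnectX parts from the mlx list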
00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:43.748 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:43.748 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.748 
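For each matched port the script resolves the backing net device through sysfs (the /sys/bus/pci/devices/$pci/net/ glob above); done by hand on this test bed:

  ls /sys/bus/pci/devices/0000:09:00.0/net/    # -> cvl_0_0, as reported just below
  ls /sys/bus/pci/devices/0000:09:00.1/net/    # -> cvl_0_1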
13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:43.748 Found net devices under 0000:09:00.0: cvl_0_0 00:07:43.748 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:44.007 Found net devices under 0000:09:00.1: cvl_0_1 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.007 13:15:41 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:07:44.007 00:07:44.007 --- 10.0.0.2 ping statistics --- 00:07:44.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.007 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:07:44.007 00:07:44.007 --- 10.0.0.1 ping statistics --- 00:07:44.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.007 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3470135 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3470135 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3470135 ']' 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
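Everything from nvmf_tcp_init down to the two pings is wiring one port of the NIC into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on a single host. Condensed, with the test-bed-specific cvl_0_* names kept as-is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                  # reachability check

The nvmf_tgt launched right after this runs inside the same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the RPC-created listeners bind to 10.0.0.2.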
00:07:44.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.007 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.007 [2024-07-12 13:15:41.426165] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:07:44.007 [2024-07-12 13:15:41.426257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.007 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.007 [2024-07-12 13:15:41.462494] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:44.265 [2024-07-12 13:15:41.489071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.265 [2024-07-12 13:15:41.572969] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.265 [2024-07-12 13:15:41.573012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.265 [2024-07-12 13:15:41.573039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.265 [2024-07-12 13:15:41.573050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.265 [2024-07-12 13:15:41.573058] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.265 [2024-07-12 13:15:41.573208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.265 [2024-07-12 13:15:41.573336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.265 [2024-07-12 13:15:41.573371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.265 [2024-07-12 13:15:41.573374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.265 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.265 [2024-07-12 13:15:41.736954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 
13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 [2024-07-12 13:15:41.749154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.523 13:15:41 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.523 13:15:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.781 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.039 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:45.039 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:45.039 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:45.039 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:45.039 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:45.039 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 
8009 -o json 00:07:45.039 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:45.297 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- 
# echo 127.0.0.2 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.555 13:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.812 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.812 rmmod nvme_tcp 00:07:45.812 rmmod nvme_fabrics 00:07:45.812 rmmod nvme_keyring 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3470135 ']' 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3470135 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3470135 ']' 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3470135 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3470135 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3470135' 00:07:46.069 killing process with pid 3470135 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3470135 00:07:46.069 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3470135 00:07:46.328 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.328 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.328 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.328 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.328 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.328 13:15:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.328 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.328 13:15:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.236 13:15:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:07:48.236 00:07:48.236 real 0m6.588s 00:07:48.236 user 0m9.229s 00:07:48.236 sys 0m2.240s 00:07:48.236 13:15:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.236 13:15:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.236 ************************************ 00:07:48.236 END TEST nvmf_referrals 00:07:48.236 ************************************ 00:07:48.236 13:15:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:48.236 13:15:45 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:48.236 13:15:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.236 13:15:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.236 13:15:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.236 ************************************ 00:07:48.236 START TEST nvmf_connect_disconnect 00:07:48.236 ************************************ 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:48.236 * Looking for test storage... 00:07:48.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.236 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.237 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.237 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.237 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.237 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:48.237 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:48.237 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.237 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.495 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.495 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.495 13:15:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.495 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.495 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.495 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.495 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.495 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
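As in the referrals test above, the common.sh preamble here generates a host identity once and reuses it for the nvme discover/connect invocations. A minimal illustration (the UUID is simply the one this run produced; the exact way common.sh derives NVME_HOSTID from the NQN is assumed here):

  NVME_HOSTNQN=$(nvme gen-hostnqn)
  # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # host ID reuses the uuid portion, per the values logged above
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json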
00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.496 13:15:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.398 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:50.399 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:50.399 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:50.399 Found net devices under 0000:09:00.0: cvl_0_0 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:50.399 Found net devices under 0000:09:00.1: cvl_0_1 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:50.399 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.657 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.657 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.657 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:07:50.657 00:07:50.657 --- 10.0.0.2 ping statistics --- 00:07:50.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.657 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:07:50.657 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:07:50.657 00:07:50.657 --- 10.0.0.1 ping statistics --- 00:07:50.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.658 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3472322 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3472322 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3472322 ']' 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.658 13:15:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.658 [2024-07-12 13:15:48.000909] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:07:50.658 [2024-07-12 13:15:48.000995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.658 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.658 [2024-07-12 13:15:48.037507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:50.658 [2024-07-12 13:15:48.063326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.916 [2024-07-12 13:15:48.156230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.916 [2024-07-12 13:15:48.156279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.916 [2024-07-12 13:15:48.156327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.916 [2024-07-12 13:15:48.156346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.916 [2024-07-12 13:15:48.156373] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.916 [2024-07-12 13:15:48.156425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.916 [2024-07-12 13:15:48.156487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.916 [2024-07-12 13:15:48.156539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.916 [2024-07-12 13:15:48.156537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.917 [2024-07-12 13:15:48.318940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.917 13:15:48 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.917 [2024-07-12 13:15:48.375903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:50.917 13:15:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:53.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.485 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:08:58.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.972 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:42.135 rmmod nvme_tcp 00:11:42.135 rmmod nvme_fabrics 00:11:42.135 rmmod nvme_keyring 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3472322 ']' 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3472322 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3472322 ']' 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3472322 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:11:42.135 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3472322 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3472322' 00:11:42.394 killing process with pid 3472322 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3472322 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3472322 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.394 13:19:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.930 13:19:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:44.930 00:11:44.930 real 3m56.254s 00:11:44.930 user 14m58.917s 00:11:44.930 sys 0m35.289s 00:11:44.930 13:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.930 13:19:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.930 ************************************ 00:11:44.930 END TEST nvmf_connect_disconnect 00:11:44.930 ************************************ 00:11:44.930 13:19:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:44.930 13:19:41 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:44.930 13:19:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:44.930 13:19:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.930 13:19:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:44.930 ************************************ 00:11:44.930 START TEST nvmf_multitarget 00:11:44.930 ************************************ 00:11:44.930 13:19:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:44.930 * Looking for test storage... 
00:11:44.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.930 13:19:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.930 13:19:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:44.930 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
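Before the multitarget-specific steps below, it helps to summarize the skeleton the connect_disconnect test above followed; this test reuses the same nvmftestinit/nvmfappstart/nvmftestfini plumbing and only swaps the body. A hedged reconstruction, using the rpc_cmd calls that appear verbatim in the connect_disconnect output above (sizes, NQN, address and port are the logged values):

    nvmftestinit                    # NIC discovery, netns, 10.0.0.1/10.0.0.2 addressing, ping checks
    nvmfappstart -m 0xF             # ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                   # -> Malloc0 (64 MB, 512-byte blocks)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ... test body: here, 100 'nvme connect -i 8' / disconnect iterations against cnode1 ...
    nvmftestfini                    # killprocess $nvmfpid, rmmod nvme-tcp/nvme-fabrics, drop the netns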
00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:44.931 13:19:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:46.833 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:46.834 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:46.834 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:46.834 Found net devices under 0000:09:00.0: cvl_0_0 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
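The loop being entered here is the NIC-discovery half of nvmftestinit: gather_supported_nvmf_pci_devs matches the host's PCI functions against the e810/x722/mlx ID lists built above and then resolves each one to its kernel net device through sysfs. A stripped-down sketch of that lookup, kept to the globs and messages visible in this log (driver and link-state checks omitted):

    net_devs=()
    for pci in "${pci_devs[@]}"; do                        # here: 0000:09:00.0 and 0000:09:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev directories backed by this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only interface names (cvl_0_0, cvl_0_1)
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done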
00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:46.834 Found net devices under 0000:09:00.1: cvl_0_1 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:46.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:46.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:11:46.834 00:11:46.834 --- 10.0.0.2 ping statistics --- 00:11:46.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.834 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:46.834 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:11:47.093 00:11:47.093 --- 10.0.0.1 ping statistics --- 00:11:47.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.093 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3503544 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3503544 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3503544 ']' 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.093 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:47.093 [2024-07-12 13:19:44.380568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:11:47.093 [2024-07-12 13:19:44.380653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.093 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.093 [2024-07-12 13:19:44.417071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:47.093 [2024-07-12 13:19:44.443420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.093 [2024-07-12 13:19:44.523650] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.093 [2024-07-12 13:19:44.523703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.093 [2024-07-12 13:19:44.523730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.093 [2024-07-12 13:19:44.523741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.093 [2024-07-12 13:19:44.523751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.093 [2024-07-12 13:19:44.523833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.093 [2024-07-12 13:19:44.523901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.093 [2024-07-12 13:19:44.523965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.093 [2024-07-12 13:19:44.523969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:47.351 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:47.609 "nvmf_tgt_1" 00:11:47.609 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:47.609 "nvmf_tgt_2" 00:11:47.609 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_get_targets 00:11:47.609 13:19:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:47.866 13:19:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:47.866 13:19:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:47.866 true 00:11:47.866 13:19:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:47.866 true 00:11:47.866 13:19:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:47.866 13:19:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.124 rmmod nvme_tcp 00:11:48.124 rmmod nvme_fabrics 00:11:48.124 rmmod nvme_keyring 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3503544 ']' 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3503544 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3503544 ']' 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3503544 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3503544 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3503544' 00:11:48.124 killing process with pid 3503544 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3503544 00:11:48.124 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3503544 00:11:48.382 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.382 13:19:45 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.382 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.382 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.382 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.382 13:19:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.382 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.382 13:19:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.919 13:19:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.919 00:11:50.919 real 0m5.857s 00:11:50.919 user 0m6.381s 00:11:50.919 sys 0m2.033s 00:11:50.919 13:19:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.919 13:19:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:50.919 ************************************ 00:11:50.919 END TEST nvmf_multitarget 00:11:50.919 ************************************ 00:11:50.919 13:19:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:50.919 13:19:47 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:50.919 13:19:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.919 13:19:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.919 13:19:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.919 ************************************ 00:11:50.919 START TEST nvmf_rpc 00:11:50.919 ************************************ 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:50.919 * Looking for test storage... 
00:11:50.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.919 13:19:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:52.864 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:52.864 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:52.864 Found net devices under 0000:09:00.0: cvl_0_0 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:52.864 Found net devices under 0000:09:00.1: cvl_0_1 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.864 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:11:52.865 00:11:52.865 --- 10.0.0.2 ping statistics --- 00:11:52.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.865 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:11:52.865 00:11:52.865 --- 10.0.0.1 ping statistics --- 00:11:52.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.865 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3505641 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3505641 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3505641 ']' 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.865 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.865 [2024-07-12 13:19:50.333457] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:11:52.865 [2024-07-12 13:19:50.333536] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.122 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.122 [2024-07-12 13:19:50.368951] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:53.122 [2024-07-12 13:19:50.397216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.122 [2024-07-12 13:19:50.487825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:53.122 [2024-07-12 13:19:50.487889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.122 [2024-07-12 13:19:50.487903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.122 [2024-07-12 13:19:50.487915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.122 [2024-07-12 13:19:50.487926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.122 [2024-07-12 13:19:50.488007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.122 [2024-07-12 13:19:50.488074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.122 [2024-07-12 13:19:50.488126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.122 [2024-07-12 13:19:50.488128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.379 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:53.379 "tick_rate": 2700000000, 00:11:53.379 "poll_groups": [ 00:11:53.379 { 00:11:53.379 "name": "nvmf_tgt_poll_group_000", 00:11:53.379 "admin_qpairs": 0, 00:11:53.379 "io_qpairs": 0, 00:11:53.379 "current_admin_qpairs": 0, 00:11:53.379 "current_io_qpairs": 0, 00:11:53.379 "pending_bdev_io": 0, 00:11:53.379 "completed_nvme_io": 0, 00:11:53.379 "transports": [] 00:11:53.379 }, 00:11:53.379 { 00:11:53.379 "name": "nvmf_tgt_poll_group_001", 00:11:53.379 "admin_qpairs": 0, 00:11:53.379 "io_qpairs": 0, 00:11:53.379 "current_admin_qpairs": 0, 00:11:53.379 "current_io_qpairs": 0, 00:11:53.379 "pending_bdev_io": 0, 00:11:53.379 "completed_nvme_io": 0, 00:11:53.379 "transports": [] 00:11:53.379 }, 00:11:53.379 { 00:11:53.379 "name": "nvmf_tgt_poll_group_002", 00:11:53.379 "admin_qpairs": 0, 00:11:53.379 "io_qpairs": 0, 00:11:53.379 "current_admin_qpairs": 0, 00:11:53.379 "current_io_qpairs": 0, 00:11:53.379 "pending_bdev_io": 0, 00:11:53.379 "completed_nvme_io": 0, 00:11:53.379 "transports": [] 00:11:53.379 }, 00:11:53.379 { 00:11:53.379 "name": "nvmf_tgt_poll_group_003", 00:11:53.379 "admin_qpairs": 0, 00:11:53.379 "io_qpairs": 0, 00:11:53.379 "current_admin_qpairs": 0, 00:11:53.379 "current_io_qpairs": 0, 00:11:53.379 "pending_bdev_io": 0, 00:11:53.379 "completed_nvme_io": 0, 00:11:53.379 "transports": [] 00:11:53.379 } 00:11:53.379 ] 00:11:53.380 }' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 
'filter=.poll_groups[].name' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.380 [2024-07-12 13:19:50.732417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:53.380 "tick_rate": 2700000000, 00:11:53.380 "poll_groups": [ 00:11:53.380 { 00:11:53.380 "name": "nvmf_tgt_poll_group_000", 00:11:53.380 "admin_qpairs": 0, 00:11:53.380 "io_qpairs": 0, 00:11:53.380 "current_admin_qpairs": 0, 00:11:53.380 "current_io_qpairs": 0, 00:11:53.380 "pending_bdev_io": 0, 00:11:53.380 "completed_nvme_io": 0, 00:11:53.380 "transports": [ 00:11:53.380 { 00:11:53.380 "trtype": "TCP" 00:11:53.380 } 00:11:53.380 ] 00:11:53.380 }, 00:11:53.380 { 00:11:53.380 "name": "nvmf_tgt_poll_group_001", 00:11:53.380 "admin_qpairs": 0, 00:11:53.380 "io_qpairs": 0, 00:11:53.380 "current_admin_qpairs": 0, 00:11:53.380 "current_io_qpairs": 0, 00:11:53.380 "pending_bdev_io": 0, 00:11:53.380 "completed_nvme_io": 0, 00:11:53.380 "transports": [ 00:11:53.380 { 00:11:53.380 "trtype": "TCP" 00:11:53.380 } 00:11:53.380 ] 00:11:53.380 }, 00:11:53.380 { 00:11:53.380 "name": "nvmf_tgt_poll_group_002", 00:11:53.380 "admin_qpairs": 0, 00:11:53.380 "io_qpairs": 0, 00:11:53.380 "current_admin_qpairs": 0, 00:11:53.380 "current_io_qpairs": 0, 00:11:53.380 "pending_bdev_io": 0, 00:11:53.380 "completed_nvme_io": 0, 00:11:53.380 "transports": [ 00:11:53.380 { 00:11:53.380 "trtype": "TCP" 00:11:53.380 } 00:11:53.380 ] 00:11:53.380 }, 00:11:53.380 { 00:11:53.380 "name": "nvmf_tgt_poll_group_003", 00:11:53.380 "admin_qpairs": 0, 00:11:53.380 "io_qpairs": 0, 00:11:53.380 "current_admin_qpairs": 0, 00:11:53.380 "current_io_qpairs": 0, 00:11:53.380 "pending_bdev_io": 0, 00:11:53.380 "completed_nvme_io": 0, 00:11:53.380 "transports": [ 00:11:53.380 { 00:11:53.380 "trtype": "TCP" 00:11:53.380 } 00:11:53.380 ] 00:11:53.380 } 00:11:53.380 ] 00:11:53.380 }' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.380 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 Malloc1 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 [2024-07-12 13:19:50.894405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:53.638 [2024-07-12 13:19:50.916863] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:53.638 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:53.638 could not add new controller: failed to write to nvme-fabrics device 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.638 13:19:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.200 13:19:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.200 13:19:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.200 13:19:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.200 13:19:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.200 13:19:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.095 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.095 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.095 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.095 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:11:56.095 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.095 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:56.095 13:19:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.353 [2024-07-12 13:19:53.706859] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:56.353 Failed to write to 
/dev/nvme-fabrics: Input/output error 00:11:56.353 could not add new controller: failed to write to nvme-fabrics device 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.353 13:19:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.285 13:19:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.285 13:19:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.285 13:19:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.285 13:19:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:57.285 13:19:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.180 13:19:56 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.180 [2024-07-12 13:19:56.529152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.180 13:19:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.744 13:19:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.744 13:19:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.744 13:19:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.744 13:19:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:59.744 13:19:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.269 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.270 [2024-07-12 13:19:59.291947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.270 13:19:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:02.527 13:19:59 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.527 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.527 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.527 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:02.527 13:19:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.052 13:20:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.052 13:20:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.052 13:20:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.052 13:20:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:05.052 13:20:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.052 13:20:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:05.052 13:20:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.052 [2024-07-12 13:20:02.072099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.052 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.310 13:20:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.310 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:05.310 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.310 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:05.310 13:20:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 
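The sequence this trace keeps repeating (target/rpc.sh@81-94) is, reconstructed from the xtrace lines above, roughly the following loop. rpc_cmd, waitforserial and waitforserial_disconnect are harness helpers from the autotest scripts rather than standalone commands, and the host NQN/ID values are the ones visible in the nvme connect lines.

# Reconstructed shape of the connect/disconnect loop exercised above.
for i in $(seq 1 "$loops"); do
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

  # Kernel initiator attaches over TCP, then the test waits for the namespace
  # to show up as a block device with the expected serial.
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  waitforserial SPDKISFASTANDAWESOME

  # Detach and undo the subsystem configuration before the next iteration.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  waitforserial_disconnect SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done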
00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.833 [2024-07-12 13:20:04.805224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.833 13:20:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.091 13:20:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.091 13:20:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.091 13:20:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.091 13:20:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:08.091 13:20:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:09.988 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:09.988 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:09.988 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.988 13:20:07 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:09.988 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.988 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:09.988 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.246 [2024-07-12 13:20:07.582267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.246 
13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.246 13:20:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.812 13:20:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.812 13:20:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:10.812 13:20:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.812 13:20:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:10.812 13:20:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 [2024-07-12 13:20:10.364686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 [2024-07-12 13:20:10.412746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.339 13:20:10 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 [2024-07-12 13:20:10.460919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
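The second loop running here (target/rpc.sh@99-107) never attaches a host at all; it only cycles the subsystem through its RPC lifecycle. Pieced together from the trace, one iteration amounts to:

# RPC-only create/configure/tear-down cycle, no nvme connect involved.
for i in $(seq 1 "$loops"); do
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no explicit nsid; removed as nsid 1 below
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done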
00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 [2024-07-12 13:20:10.509092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
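The waitforserial and waitforserial_disconnect gates used in the connect/disconnect cycles further up are simple lsblk polls. A minimal sketch, assuming the retry bounds hinted at by the trace (only the success paths are actually visible above, so the loop limits and sleeps here are illustrative):

# Poll until exactly one block device with the given serial is visible.
waitforserial() {
  local serial=$1 nvme_device_counter=1 nvme_devices=0 i=0
  while (( i++ <= 15 )); do
    sleep 2
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
  done
  return 1
}

# Poll until no device with that serial remains after nvme disconnect.
waitforserial_disconnect() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    if ! lsblk -o NAME,SERIAL | grep -q -w "$serial" && \
       ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
      return 0
    fi
    sleep 2
  done
  return 1
}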
00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 [2024-07-12 13:20:10.557271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:13.340 "tick_rate": 2700000000, 00:12:13.340 "poll_groups": [ 00:12:13.340 { 00:12:13.340 "name": "nvmf_tgt_poll_group_000", 00:12:13.340 "admin_qpairs": 2, 00:12:13.340 "io_qpairs": 84, 00:12:13.340 "current_admin_qpairs": 0, 00:12:13.340 "current_io_qpairs": 0, 00:12:13.340 "pending_bdev_io": 0, 00:12:13.340 "completed_nvme_io": 160, 00:12:13.340 "transports": [ 00:12:13.340 { 00:12:13.340 "trtype": "TCP" 00:12:13.340 } 00:12:13.340 ] 00:12:13.340 }, 00:12:13.340 { 00:12:13.340 "name": "nvmf_tgt_poll_group_001", 00:12:13.340 "admin_qpairs": 2, 00:12:13.340 "io_qpairs": 84, 00:12:13.340 "current_admin_qpairs": 0, 00:12:13.340 "current_io_qpairs": 0, 00:12:13.340 "pending_bdev_io": 0, 
00:12:13.340 "completed_nvme_io": 234, 00:12:13.340 "transports": [ 00:12:13.340 { 00:12:13.340 "trtype": "TCP" 00:12:13.340 } 00:12:13.340 ] 00:12:13.340 }, 00:12:13.340 { 00:12:13.340 "name": "nvmf_tgt_poll_group_002", 00:12:13.340 "admin_qpairs": 1, 00:12:13.340 "io_qpairs": 84, 00:12:13.340 "current_admin_qpairs": 0, 00:12:13.340 "current_io_qpairs": 0, 00:12:13.340 "pending_bdev_io": 0, 00:12:13.340 "completed_nvme_io": 207, 00:12:13.340 "transports": [ 00:12:13.340 { 00:12:13.340 "trtype": "TCP" 00:12:13.340 } 00:12:13.340 ] 00:12:13.340 }, 00:12:13.340 { 00:12:13.340 "name": "nvmf_tgt_poll_group_003", 00:12:13.340 "admin_qpairs": 2, 00:12:13.340 "io_qpairs": 84, 00:12:13.340 "current_admin_qpairs": 0, 00:12:13.340 "current_io_qpairs": 0, 00:12:13.340 "pending_bdev_io": 0, 00:12:13.340 "completed_nvme_io": 85, 00:12:13.340 "transports": [ 00:12:13.340 { 00:12:13.340 "trtype": "TCP" 00:12:13.340 } 00:12:13.340 ] 00:12:13.340 } 00:12:13.340 ] 00:12:13.340 }' 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:13.340 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.341 rmmod nvme_tcp 00:12:13.341 rmmod nvme_fabrics 00:12:13.341 rmmod nvme_keyring 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3505641 ']' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3505641 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3505641 ']' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3505641 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3505641 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3505641' 00:12:13.341 killing process with pid 3505641 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3505641 00:12:13.341 13:20:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3505641 00:12:13.600 13:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.600 13:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.600 13:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.600 13:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.600 13:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.600 13:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.600 13:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.600 13:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.145 13:20:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:16.145 00:12:16.145 real 0m25.230s 00:12:16.145 user 1m21.513s 00:12:16.145 sys 0m4.170s 00:12:16.145 13:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.145 13:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.145 ************************************ 00:12:16.145 END TEST nvmf_rpc 00:12:16.145 ************************************ 00:12:16.145 13:20:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:16.145 13:20:13 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:16.145 13:20:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:16.145 13:20:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.145 13:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:16.145 ************************************ 00:12:16.145 START TEST nvmf_invalid 00:12:16.145 ************************************ 00:12:16.145 13:20:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:16.145 * Looking for test storage... 
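The nvmftestfini shutdown traced just above reduces to three steps: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt started for the test, and drop the namespace plus the test address. Approximately (remove_spdk_ns is a harness helper, and the kill/wait pair is shown without the retry logic around it):

# Initiator side: drop the kernel modules (the rmmod lines above come from this).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Target side: stop the nvmf_tgt process started for this test (pid 3505641 in this run).
kill $nvmfpid
wait $nvmfpid

# Network side: remove the cvl_0_0_ns_spdk namespace and flush the initiator address.
_remove_spdk_ns
ip -4 addr flush cvl_0_1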
00:12:16.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.145 13:20:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.145 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:16.145 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.145 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.145 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.145 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.145 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:16.146 13:20:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:18.056 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:18.056 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:18.056 Found net devices under 0000:09:00.0: cvl_0_0 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:18.056 Found net devices under 0000:09:00.1: cvl_0_1 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:18.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:12:18.056 00:12:18.056 --- 10.0.0.2 ping statistics --- 00:12:18.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.056 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:18.056 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:12:18.056 00:12:18.056 --- 10.0.0.1 ping statistics --- 00:12:18.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.057 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3510128 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3510128 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3510128 ']' 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.057 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:18.057 [2024-07-12 13:20:15.470988] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
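Condensed from the nvmf_tcp_init / nvmfappstart trace above: one physical port (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched inside that namespace. The commands below are copied from the trace; only the binary path is shortened and the backgrounding/pid capture is paraphrased.

# Target port lives in its own namespace, initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in, then sanity-check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace; nvmfpid (3510128 above) is this process.
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!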
00:12:18.057 [2024-07-12 13:20:15.471070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.057 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.057 [2024-07-12 13:20:15.509696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:18.365 [2024-07-12 13:20:15.541192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.365 [2024-07-12 13:20:15.630793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.365 [2024-07-12 13:20:15.630859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.365 [2024-07-12 13:20:15.630872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.365 [2024-07-12 13:20:15.630883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.365 [2024-07-12 13:20:15.630893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.365 [2024-07-12 13:20:15.630980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.365 [2024-07-12 13:20:15.631056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.365 [2024-07-12 13:20:15.631124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.365 [2024-07-12 13:20:15.631127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.365 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:18.365 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:18.365 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.365 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:18.365 13:20:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:18.365 13:20:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.365 13:20:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:18.365 13:20:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21762 00:12:18.623 [2024-07-12 13:20:16.013610] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:18.623 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:18.623 { 00:12:18.623 "nqn": "nqn.2016-06.io.spdk:cnode21762", 00:12:18.623 "tgt_name": "foobar", 00:12:18.623 "method": "nvmf_create_subsystem", 00:12:18.623 "req_id": 1 00:12:18.623 } 00:12:18.623 Got JSON-RPC error response 00:12:18.623 response: 00:12:18.623 { 00:12:18.623 "code": -32603, 00:12:18.623 "message": "Unable to find target foobar" 00:12:18.623 }' 00:12:18.623 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:18.623 { 00:12:18.623 "nqn": "nqn.2016-06.io.spdk:cnode21762", 00:12:18.623 "tgt_name": "foobar", 00:12:18.623 "method": "nvmf_create_subsystem", 00:12:18.623 "req_id": 1 
00:12:18.623 } 00:12:18.623 Got JSON-RPC error response 00:12:18.623 response: 00:12:18.623 { 00:12:18.623 "code": -32603, 00:12:18.623 "message": "Unable to find target foobar" 00:12:18.623 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:18.623 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:18.623 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22574 00:12:18.880 [2024-07-12 13:20:16.310653] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22574: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:18.880 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:18.880 { 00:12:18.880 "nqn": "nqn.2016-06.io.spdk:cnode22574", 00:12:18.880 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:18.880 "method": "nvmf_create_subsystem", 00:12:18.880 "req_id": 1 00:12:18.880 } 00:12:18.880 Got JSON-RPC error response 00:12:18.880 response: 00:12:18.880 { 00:12:18.880 "code": -32602, 00:12:18.880 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:18.880 }' 00:12:18.880 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:18.880 { 00:12:18.880 "nqn": "nqn.2016-06.io.spdk:cnode22574", 00:12:18.880 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:18.880 "method": "nvmf_create_subsystem", 00:12:18.880 "req_id": 1 00:12:18.880 } 00:12:18.880 Got JSON-RPC error response 00:12:18.880 response: 00:12:18.880 { 00:12:18.880 "code": -32602, 00:12:18.880 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:18.880 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:18.881 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:18.881 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28080 00:12:19.139 [2024-07-12 13:20:16.587561] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28080: invalid model number 'SPDK_Controller' 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:19.139 { 00:12:19.139 "nqn": "nqn.2016-06.io.spdk:cnode28080", 00:12:19.139 "model_number": "SPDK_Controller\u001f", 00:12:19.139 "method": "nvmf_create_subsystem", 00:12:19.139 "req_id": 1 00:12:19.139 } 00:12:19.139 Got JSON-RPC error response 00:12:19.139 response: 00:12:19.139 { 00:12:19.139 "code": -32602, 00:12:19.139 "message": "Invalid MN SPDK_Controller\u001f" 00:12:19.139 }' 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:19.139 { 00:12:19.139 "nqn": "nqn.2016-06.io.spdk:cnode28080", 00:12:19.139 "model_number": "SPDK_Controller\u001f", 00:12:19.139 "method": "nvmf_create_subsystem", 00:12:19.139 "req_id": 1 00:12:19.139 } 00:12:19.139 Got JSON-RPC error response 00:12:19.139 response: 00:12:19.139 { 00:12:19.139 "code": -32602, 00:12:19.139 "message": "Invalid MN SPDK_Controller\u001f" 00:12:19.139 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' 
'48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:19.139 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 
00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 
00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ F == \- ]] 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Fp#U5Jru22[JK%?_nz3J' 00:12:19.397 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Fp#U5Jru22[JK%?_nz3J' nqn.2016-06.io.spdk:cnode24492 00:12:19.656 [2024-07-12 13:20:16.892651] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24492: invalid serial number 'Fp#U5Jru22[JK%?_nz3J' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:19.656 { 00:12:19.656 "nqn": "nqn.2016-06.io.spdk:cnode24492", 00:12:19.656 "serial_number": 
"Fp#U\u007f5Jru22[JK%?_nz3J", 00:12:19.656 "method": "nvmf_create_subsystem", 00:12:19.656 "req_id": 1 00:12:19.656 } 00:12:19.656 Got JSON-RPC error response 00:12:19.656 response: 00:12:19.656 { 00:12:19.656 "code": -32602, 00:12:19.656 "message": "Invalid SN Fp#U\u007f5Jru22[JK%?_nz3J" 00:12:19.656 }' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:19.656 { 00:12:19.656 "nqn": "nqn.2016-06.io.spdk:cnode24492", 00:12:19.656 "serial_number": "Fp#U\u007f5Jru22[JK%?_nz3J", 00:12:19.656 "method": "nvmf_create_subsystem", 00:12:19.656 "req_id": 1 00:12:19.656 } 00:12:19.656 Got JSON-RPC error response 00:12:19.656 response: 00:12:19.656 { 00:12:19.656 "code": -32602, 00:12:19.656 "message": "Invalid SN Fp#U\u007f5Jru22[JK%?_nz3J" 00:12:19.656 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.656 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:19.657 13:20:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 
00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.657 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.658 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:19.658 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:19.658 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:19.658 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:19.658 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:19.658 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:12:19.658 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'b{:{UTu0?+dt"7g3r!V,wT'\''|Q~C-jrH$#k\*V|ag' 00:12:19.658 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'b{:{UTu0?+dt"7g3r!V,wT'\''|Q~C-jrH$#k\*V|ag' nqn.2016-06.io.spdk:cnode30068 00:12:19.915 [2024-07-12 13:20:17.285963] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30068: invalid model number 'b{:{UTu0?+dt"7g3r!V,wT'|Q~C-jrH$#k\*V|ag' 00:12:19.915 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:19.915 { 00:12:19.915 "nqn": "nqn.2016-06.io.spdk:cnode30068", 00:12:19.915 "model_number": "b{:{UTu0?+dt\"7g3r!V,wT'\''|Q~C-jrH$#k\u007f\\*V|ag", 00:12:19.915 "method": "nvmf_create_subsystem", 00:12:19.915 "req_id": 1 00:12:19.915 } 00:12:19.915 Got JSON-RPC error response 00:12:19.915 response: 00:12:19.915 { 00:12:19.915 "code": -32602, 00:12:19.915 "message": "Invalid MN b{:{UTu0?+dt\"7g3r!V,wT'\''|Q~C-jrH$#k\u007f\\*V|ag" 00:12:19.915 }' 00:12:19.915 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:19.915 { 00:12:19.915 "nqn": "nqn.2016-06.io.spdk:cnode30068", 00:12:19.915 "model_number": "b{:{UTu0?+dt\"7g3r!V,wT'|Q~C-jrH$#k\u007f\\*V|ag", 00:12:19.915 "method": "nvmf_create_subsystem", 00:12:19.915 "req_id": 1 00:12:19.915 } 00:12:19.915 Got JSON-RPC error response 00:12:19.916 response: 00:12:19.916 { 00:12:19.916 "code": -32602, 00:12:19.916 "message": "Invalid MN b{:{UTu0?+dt\"7g3r!V,wT'|Q~C-jrH$#k\u007f\\*V|ag" 00:12:19.916 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:19.916 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:20.173 [2024-07-12 13:20:17.522816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.173 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:20.430 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:20.431 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:20.431 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:20.431 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:20.431 13:20:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:20.689 [2024-07-12 13:20:18.032520] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:20.689 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:20.689 { 00:12:20.689 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:20.689 "listen_address": { 00:12:20.689 "trtype": "tcp", 00:12:20.689 "traddr": "", 00:12:20.689 "trsvcid": "4421" 00:12:20.689 }, 00:12:20.689 "method": "nvmf_subsystem_remove_listener", 00:12:20.689 "req_id": 1 00:12:20.689 } 00:12:20.689 Got JSON-RPC error response 00:12:20.689 response: 00:12:20.689 { 00:12:20.689 "code": -32602, 00:12:20.689 "message": "Invalid parameters" 00:12:20.689 }' 00:12:20.689 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:20.689 { 00:12:20.689 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:20.689 "listen_address": { 00:12:20.689 "trtype": "tcp", 00:12:20.689 "traddr": "", 00:12:20.689 
"trsvcid": "4421" 00:12:20.689 }, 00:12:20.689 "method": "nvmf_subsystem_remove_listener", 00:12:20.689 "req_id": 1 00:12:20.689 } 00:12:20.689 Got JSON-RPC error response 00:12:20.689 response: 00:12:20.689 { 00:12:20.689 "code": -32602, 00:12:20.689 "message": "Invalid parameters" 00:12:20.689 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:20.689 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6724 -i 0 00:12:20.948 [2024-07-12 13:20:18.281335] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6724: invalid cntlid range [0-65519] 00:12:20.948 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:20.948 { 00:12:20.948 "nqn": "nqn.2016-06.io.spdk:cnode6724", 00:12:20.948 "min_cntlid": 0, 00:12:20.948 "method": "nvmf_create_subsystem", 00:12:20.948 "req_id": 1 00:12:20.948 } 00:12:20.948 Got JSON-RPC error response 00:12:20.948 response: 00:12:20.948 { 00:12:20.948 "code": -32602, 00:12:20.948 "message": "Invalid cntlid range [0-65519]" 00:12:20.948 }' 00:12:20.948 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:20.948 { 00:12:20.948 "nqn": "nqn.2016-06.io.spdk:cnode6724", 00:12:20.948 "min_cntlid": 0, 00:12:20.948 "method": "nvmf_create_subsystem", 00:12:20.948 "req_id": 1 00:12:20.948 } 00:12:20.948 Got JSON-RPC error response 00:12:20.948 response: 00:12:20.948 { 00:12:20.948 "code": -32602, 00:12:20.948 "message": "Invalid cntlid range [0-65519]" 00:12:20.948 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:20.948 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26684 -i 65520 00:12:21.206 [2024-07-12 13:20:18.522188] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26684: invalid cntlid range [65520-65519] 00:12:21.206 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:21.206 { 00:12:21.206 "nqn": "nqn.2016-06.io.spdk:cnode26684", 00:12:21.206 "min_cntlid": 65520, 00:12:21.206 "method": "nvmf_create_subsystem", 00:12:21.206 "req_id": 1 00:12:21.206 } 00:12:21.206 Got JSON-RPC error response 00:12:21.206 response: 00:12:21.206 { 00:12:21.206 "code": -32602, 00:12:21.206 "message": "Invalid cntlid range [65520-65519]" 00:12:21.206 }' 00:12:21.206 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:21.206 { 00:12:21.206 "nqn": "nqn.2016-06.io.spdk:cnode26684", 00:12:21.206 "min_cntlid": 65520, 00:12:21.206 "method": "nvmf_create_subsystem", 00:12:21.206 "req_id": 1 00:12:21.206 } 00:12:21.206 Got JSON-RPC error response 00:12:21.206 response: 00:12:21.206 { 00:12:21.206 "code": -32602, 00:12:21.206 "message": "Invalid cntlid range [65520-65519]" 00:12:21.206 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:21.206 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode460 -I 0 00:12:21.464 [2024-07-12 13:20:18.767023] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode460: invalid cntlid range [1-0] 00:12:21.464 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:21.464 { 00:12:21.464 "nqn": "nqn.2016-06.io.spdk:cnode460", 00:12:21.464 "max_cntlid": 0, 
00:12:21.464 "method": "nvmf_create_subsystem", 00:12:21.464 "req_id": 1 00:12:21.464 } 00:12:21.464 Got JSON-RPC error response 00:12:21.464 response: 00:12:21.464 { 00:12:21.464 "code": -32602, 00:12:21.464 "message": "Invalid cntlid range [1-0]" 00:12:21.464 }' 00:12:21.464 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:21.464 { 00:12:21.464 "nqn": "nqn.2016-06.io.spdk:cnode460", 00:12:21.464 "max_cntlid": 0, 00:12:21.464 "method": "nvmf_create_subsystem", 00:12:21.464 "req_id": 1 00:12:21.464 } 00:12:21.464 Got JSON-RPC error response 00:12:21.464 response: 00:12:21.464 { 00:12:21.464 "code": -32602, 00:12:21.464 "message": "Invalid cntlid range [1-0]" 00:12:21.464 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:21.464 13:20:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2028 -I 65520 00:12:21.721 [2024-07-12 13:20:19.007815] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2028: invalid cntlid range [1-65520] 00:12:21.721 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:21.721 { 00:12:21.721 "nqn": "nqn.2016-06.io.spdk:cnode2028", 00:12:21.721 "max_cntlid": 65520, 00:12:21.721 "method": "nvmf_create_subsystem", 00:12:21.721 "req_id": 1 00:12:21.721 } 00:12:21.721 Got JSON-RPC error response 00:12:21.721 response: 00:12:21.721 { 00:12:21.721 "code": -32602, 00:12:21.721 "message": "Invalid cntlid range [1-65520]" 00:12:21.721 }' 00:12:21.721 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:21.721 { 00:12:21.721 "nqn": "nqn.2016-06.io.spdk:cnode2028", 00:12:21.721 "max_cntlid": 65520, 00:12:21.721 "method": "nvmf_create_subsystem", 00:12:21.721 "req_id": 1 00:12:21.721 } 00:12:21.721 Got JSON-RPC error response 00:12:21.721 response: 00:12:21.721 { 00:12:21.721 "code": -32602, 00:12:21.721 "message": "Invalid cntlid range [1-65520]" 00:12:21.721 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:21.721 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10283 -i 6 -I 5 00:12:21.979 [2024-07-12 13:20:19.260721] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10283: invalid cntlid range [6-5] 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:21.979 { 00:12:21.979 "nqn": "nqn.2016-06.io.spdk:cnode10283", 00:12:21.979 "min_cntlid": 6, 00:12:21.979 "max_cntlid": 5, 00:12:21.979 "method": "nvmf_create_subsystem", 00:12:21.979 "req_id": 1 00:12:21.979 } 00:12:21.979 Got JSON-RPC error response 00:12:21.979 response: 00:12:21.979 { 00:12:21.979 "code": -32602, 00:12:21.979 "message": "Invalid cntlid range [6-5]" 00:12:21.979 }' 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:21.979 { 00:12:21.979 "nqn": "nqn.2016-06.io.spdk:cnode10283", 00:12:21.979 "min_cntlid": 6, 00:12:21.979 "max_cntlid": 5, 00:12:21.979 "method": "nvmf_create_subsystem", 00:12:21.979 "req_id": 1 00:12:21.979 } 00:12:21.979 Got JSON-RPC error response 00:12:21.979 response: 00:12:21.979 { 00:12:21.979 "code": -32602, 00:12:21.979 "message": "Invalid cntlid range [6-5]" 00:12:21.979 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:21.979 { 00:12:21.979 "name": "foobar", 00:12:21.979 "method": "nvmf_delete_target", 00:12:21.979 "req_id": 1 00:12:21.979 } 00:12:21.979 Got JSON-RPC error response 00:12:21.979 response: 00:12:21.979 { 00:12:21.979 "code": -32602, 00:12:21.979 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:21.979 }' 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:21.979 { 00:12:21.979 "name": "foobar", 00:12:21.979 "method": "nvmf_delete_target", 00:12:21.979 "req_id": 1 00:12:21.979 } 00:12:21.979 Got JSON-RPC error response 00:12:21.979 response: 00:12:21.979 { 00:12:21.979 "code": -32602, 00:12:21.979 "message": "The specified target doesn't exist, cannot delete it." 00:12:21.979 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.979 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.979 rmmod nvme_tcp 00:12:21.979 rmmod nvme_fabrics 00:12:21.979 rmmod nvme_keyring 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3510128 ']' 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3510128 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3510128 ']' 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3510128 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3510128 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3510128' 00:12:22.237 killing process with pid 3510128 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3510128 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3510128 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.237 13:20:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.778 13:20:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.778 00:12:24.778 real 0m8.619s 00:12:24.778 user 0m19.816s 00:12:24.778 sys 0m2.500s 00:12:24.778 13:20:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.778 13:20:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:24.778 ************************************ 00:12:24.778 END TEST nvmf_invalid 00:12:24.778 ************************************ 00:12:24.778 13:20:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:24.778 13:20:21 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:24.778 13:20:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:24.778 13:20:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.778 13:20:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.778 ************************************ 00:12:24.778 START TEST nvmf_abort 00:12:24.778 ************************************ 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:24.778 * Looking for test storage... 
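Every check in the nvmf_invalid run that just finished follows the same shape: issue an RPC with a deliberately bad argument (unknown target name, serial number or model number containing a non-printable byte, cntlid outside [1, 65519], or min_cntlid greater than max_cntlid), require the call to fail, and glob-match the JSON-RPC error text. A condensed sketch of that pattern, using an illustrative expect_rpc_error helper that is not part of invalid.sh:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Sketch only: run an RPC that must fail, then check the error message it printed.
expect_rpc_error() {
    local pattern=$1; shift
    local out
    if out=$("$rpc" "$@" 2>&1); then
        echo "RPC unexpectedly succeeded: $*" >&2
        return 1
    fi
    [[ $out == *"$pattern"* ]] || { echo "unexpected error: $out" >&2; return 1; }
}
# cntlid must stay inside [1, 65519] and min_cntlid <= max_cntlid:
expect_rpc_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0
expect_rpc_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 6 -I 5
# serial numbers with non-printable bytes are rejected the same way:
expect_rpc_error 'Invalid SN' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s $'BADSERIAL\037'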
00:12:24.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
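The common.sh sourcing above also prepares the host-side identity for later connect tests: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID reuses the same UUID, and NVME_CONNECT/NVME_HOST wrap them for reuse. Purely as an illustration of how those pieces are combined in later steps (the addresses below reuse the 10.0.0.2/4420 values from this run; the command itself is not part of the log shown here):

# Sketch only: connect a host to the target subsystem using the generated identity.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
# ...and tear it down again afterwards:
# nvme disconnect -n nqn.2016-06.io.spdk:testnqn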
00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.778 13:20:21 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:24.779 13:20:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:26.684 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.684 13:20:23 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:26.684 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:26.684 Found net devices under 0000:09:00.0: cvl_0_0 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.684 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:26.685 Found net devices under 0000:09:00.1: cvl_0_1 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.685 13:20:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:12:26.685 00:12:26.685 --- 10.0.0.2 ping statistics --- 00:12:26.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.685 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:12:26.685 00:12:26.685 --- 10.0.0.1 ping statistics --- 00:12:26.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.685 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3512767 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3512767 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3512767 ']' 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.685 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.943 [2024-07-12 13:20:24.174643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:12:26.943 [2024-07-12 13:20:24.174733] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.943 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.943 [2024-07-12 13:20:24.217142] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:26.943 [2024-07-12 13:20:24.242723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.943 [2024-07-12 13:20:24.330902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:26.943 [2024-07-12 13:20:24.330954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.943 [2024-07-12 13:20:24.330981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.943 [2024-07-12 13:20:24.330992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.943 [2024-07-12 13:20:24.331002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.943 [2024-07-12 13:20:24.331056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.943 [2024-07-12 13:20:24.331112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.943 [2024-07-12 13:20:24.331115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:27.201 [2024-07-12 13:20:24.476057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:27.201 Malloc0 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:27.201 Delay0 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:27.201 [2024-07-12 13:20:24.545359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.201 13:20:24 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:27.201 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.201 [2024-07-12 13:20:24.651487] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:29.726 Initializing NVMe Controllers 00:12:29.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:29.726 controller IO queue size 128 less than required 00:12:29.726 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:29.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:29.726 Initialization complete. Launching workers. 
00:12:29.726 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33254 00:12:29.726 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33315, failed to submit 62 00:12:29.726 success 33258, unsuccess 57, failed 0 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.726 rmmod nvme_tcp 00:12:29.726 rmmod nvme_fabrics 00:12:29.726 rmmod nvme_keyring 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3512767 ']' 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3512767 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3512767 ']' 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3512767 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3512767 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3512767' 00:12:29.726 killing process with pid 3512767 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3512767 00:12:29.726 13:20:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3512767 00:12:29.726 13:20:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.726 13:20:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.726 13:20:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.726 13:20:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.726 13:20:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.726 13:20:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.726 13:20:27 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.726 13:20:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.631 13:20:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.631 00:12:31.631 real 0m7.280s 00:12:31.631 user 0m10.279s 00:12:31.631 sys 0m2.630s 00:12:31.631 13:20:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.631 13:20:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:31.631 ************************************ 00:12:31.631 END TEST nvmf_abort 00:12:31.631 ************************************ 00:12:31.890 13:20:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:31.890 13:20:29 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:31.890 13:20:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.890 13:20:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.890 13:20:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.890 ************************************ 00:12:31.890 START TEST nvmf_ns_hotplug_stress 00:12:31.890 ************************************ 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:31.890 * Looking for test storage... 00:12:31.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.890 13:20:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.890 13:20:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.890 13:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:33.806 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:33.806 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.806 13:20:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:33.806 Found net devices under 0000:09:00.0: cvl_0_0 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:33.806 Found net devices under 0000:09:00.1: cvl_0_1 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.806 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.807 13:20:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.807 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:12:34.065 00:12:34.065 --- 10.0.0.2 ping statistics --- 00:12:34.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.065 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:12:34.065 00:12:34.065 --- 10.0.0.1 ping statistics --- 00:12:34.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.065 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3514986 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3514986 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3514986 ']' 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.065 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.065 [2024-07-12 13:20:31.477793] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:12:34.065 [2024-07-12 13:20:31.477891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.065 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.065 [2024-07-12 13:20:31.517877] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:34.323 [2024-07-12 13:20:31.545199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:34.323 [2024-07-12 13:20:31.635994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.323 [2024-07-12 13:20:31.636042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.323 [2024-07-12 13:20:31.636072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.323 [2024-07-12 13:20:31.636083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.323 [2024-07-12 13:20:31.636093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.323 [2024-07-12 13:20:31.636181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.323 [2024-07-12 13:20:31.636243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.323 [2024-07-12 13:20:31.636246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.323 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.323 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:12:34.323 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.323 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.323 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.323 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.323 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:34.323 13:20:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:34.580 [2024-07-12 13:20:32.025399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.580 13:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:34.837 13:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.095 [2024-07-12 13:20:32.508033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.095 13:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:35.352 13:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:35.916 Malloc0 00:12:35.916 13:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:35.916 Delay0 00:12:35.916 13:20:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.173 13:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:36.430 NULL1 00:12:36.430 13:20:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:36.994 13:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3515406 00:12:36.994 13:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:36.994 13:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:36.994 13:20:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.994 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.927 Read completed with error (sct=0, sc=11) 00:12:37.927 13:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.185 13:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:38.185 13:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:38.442 true 00:12:38.442 13:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:38.442 13:20:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.375 13:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.633 13:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:39.633 13:20:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:39.890 true 00:12:39.890 13:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:39.890 13:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.148 13:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.405 13:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:40.405 13:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:40.663 true 00:12:40.663 13:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:40.663 13:20:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.920 13:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.178 13:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:41.178 13:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:41.435 true 00:12:41.435 13:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:41.435 13:20:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.368 13:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.626 13:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:42.626 13:20:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:42.883 true 00:12:42.883 13:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:42.883 13:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.140 13:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.398 13:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:43.398 13:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:43.655 true 00:12:43.655 13:20:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:43.655 13:20:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.604 13:20:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:44.604 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:44.604 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:44.861 true 00:12:44.861 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:44.861 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.118 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.376 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:45.376 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:45.643 true 00:12:45.643 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:45.643 13:20:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.579 13:20:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.838 13:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:46.838 13:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:47.095 true 00:12:47.095 13:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:47.095 13:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.352 13:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.610 13:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:47.610 13:20:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:47.867 true 00:12:47.867 13:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:47.867 
13:20:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.798 13:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.054 13:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:49.055 13:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:49.312 true 00:12:49.312 13:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:49.312 13:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.598 13:20:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.854 13:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:49.854 13:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:50.111 true 00:12:50.111 13:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:50.111 13:20:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.044 13:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.044 13:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:51.044 13:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:51.302 true 00:12:51.302 13:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:51.302 13:20:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.560 13:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.815 13:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:51.815 13:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:52.072 true 00:12:52.072 13:20:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:52.072 13:20:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.004 13:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.261 13:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:53.262 13:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:53.518 true 00:12:53.518 13:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:53.518 13:20:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.776 13:20:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.033 13:20:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:54.033 13:20:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:54.290 true 00:12:54.290 13:20:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:54.290 13:20:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.547 13:20:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.111 13:20:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:55.111 13:20:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:55.111 true 00:12:55.111 13:20:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:55.111 13:20:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.481 13:20:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.481 13:20:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:56.481 13:20:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:56.739 true 00:12:56.739 13:20:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:56.739 13:20:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.995 13:20:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.252 13:20:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:57.252 13:20:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:57.510 true 00:12:57.510 13:20:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:57.510 13:20:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.442 13:20:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.699 13:20:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:58.699 13:20:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:58.957 true 00:12:58.957 13:20:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:58.957 13:20:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.215 13:20:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.473 13:20:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:59.473 13:20:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:59.730 true 00:12:59.730 13:20:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:12:59.730 13:20:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.662 13:20:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.920 13:20:58 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:00.920 13:20:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:01.177 true 00:13:01.177 13:20:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:13:01.177 13:20:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.435 13:20:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.435 13:20:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:01.435 13:20:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:01.692 true 00:13:01.692 13:20:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:13:01.692 13:20:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.624 13:20:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.881 13:21:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:02.881 13:21:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:03.138 true 00:13:03.138 13:21:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:13:03.138 13:21:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.396 13:21:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.653 13:21:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:03.653 13:21:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:03.911 true 00:13:03.911 13:21:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:13:03.911 13:21:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.881 13:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.881 13:21:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:04.881 13:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:05.138 true 00:13:05.138 13:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:13:05.138 13:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.394 13:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.651 13:21:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:05.652 13:21:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:05.908 true 00:13:05.908 13:21:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406 00:13:05.908 13:21:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.838 13:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.095 13:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:07.095 13:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:07.095 Initializing NVMe Controllers 00:13:07.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:07.095 Controller IO queue size 128, less than required. 00:13:07.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:07.095 Controller IO queue size 128, less than required. 00:13:07.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:07.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:07.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:07.095 Initialization complete. Launching workers. 
00:13:07.095 ========================================================
00:13:07.095 Latency(us)
00:13:07.095 Device Information : IOPS MiB/s Average min max
00:13:07.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 940.81 0.46 71752.46 2250.24 1012140.74
00:13:07.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8783.98 4.29 14571.83 5615.85 456672.59
00:13:07.095 ========================================================
00:13:07.095 Total : 9724.79 4.75 20103.66 2250.24 1012140.74
00:13:07.095
00:13:07.352 true
00:13:07.352 13:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3515406
00:13:07.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3515406) - No such process
00:13:07.352 13:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3515406
00:13:07.352 13:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:07.609 13:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:07.867 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:07.867 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:07.867 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:07.867 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:07.867 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:08.124 null0
00:13:08.125 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:08.125 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:08.125 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:08.382 null1
00:13:08.382 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:08.382 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:08.382 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:13:08.640 null2
00:13:08.640 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:08.640 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:08.640 13:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:13:08.898 null3
00:13:08.898 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:08.898 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:08.898 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:09.156 null4 00:13:09.156 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:09.156 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:09.156 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:09.414 null5 00:13:09.414 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:09.414 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:09.414 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:09.672 null6 00:13:09.672 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:09.672 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:09.672 13:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:09.930 null7 00:13:09.930 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:09.930 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:09.930 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:09.930 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.930 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:09.930 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
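For readability: the add_remove calls being traced here (the @14-@18 trace markers) boil down to a small worker loop. The block below is only a sketch reconstructed from the trace above, not the script itself; the 10-iteration bound, the namespace/bdev arguments, and the RPC names are taken from the trace, while the relative scripts/rpc.py path stands in for the absolute Jenkins workspace path shown in the log.

# Sketch of the add_remove worker inferred from the @14-@18 trace lines (not verbatim).
add_remove() {
    local nsid=$1 bdev=$2    # e.g. "add_remove 1 null0" as traced above
    for ((i = 0; i < 10; i++)); do
        # attach the null bdev as namespace $nsid of cnode1, then detach it again
        scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}
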
00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
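The surrounding nthreads=8, pids+=($!) and wait entries correspond to a launcher of roughly the following shape. Again this is a sketch inferred from the log, with the bdev_null_create arguments (100, 4096) copied from the calls traced above; the two-loop structure mirrors the order in which the trace creates all null bdevs before starting any workers.

# Sketch of the launch sequence inferred from the @58-@66 trace lines (not verbatim).
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    # one null bdev per worker, created with the same arguments as in the trace
    scripts/rpc.py bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do
    # run each hotplug worker in the background; namespace IDs are 1-based
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"    # corresponds to the "wait 3519431 3519432 ..." entry traced below
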
00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
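The long run of null_size=1003 through 1028 entries earlier in this section comes from the first phase of the test (the @44-@50 trace markers), which keeps removing namespace 1, re-adding the Delay0 bdev, and growing a null bdev while a background I/O process is still alive. A minimal sketch of that loop, reconstructed from the trace: the PID 3515406 and the Delay0/NULL1 names are taken from the log, everything else (variable names, redirects, the exact loop form) is an assumption.

# Sketch of the first-phase hotplug/resize loop inferred from the @44-@50 trace lines.
pid=3515406      # background I/O process whose exit ends the loop (from the trace)
null_size=1000
while kill -0 "$pid" 2> /dev/null; do                                      # @44
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
    ((++null_size))                                                         # @49
    scripts/rpc.py bdev_null_resize NULL1 "$null_size"                      # @50
done
# once the process is gone, kill -0 fails (the "No such process" message above)
# and the script falls through to "wait 3515406" (@53) before removing namespaces
wait "$pid"
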
00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3519431 3519432 3519434 3519436 3519438 3519440 3519442 3519444 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.931 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.189 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.189 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.189 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.189 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.189 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.189 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.189 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.189 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.447 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.447 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.447 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.448 13:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.706 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.706 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.706 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.706 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.706 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.706 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.707 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.707 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.964 13:21:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.964 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.223 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.223 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.223 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.223 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.223 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.223 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.223 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.223 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.481 13:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:11.740 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.740 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.740 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.740 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.740 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.740 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.740 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.740 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.999 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.999 
13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.257 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.257 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.257 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.257 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.257 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.257 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.257 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.257 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.515 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.516 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.516 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.516 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.516 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.516 13:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.773 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.773 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.773 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.773 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.773 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.773 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.773 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.773 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.030 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.288 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.288 
13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.288 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.288 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.545 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.545 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.545 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.545 13:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.802 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.060 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.060 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.060 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.060 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.060 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.060 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.060 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.060 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.318 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.576 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.577 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.577 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.577 
13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.577 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.577 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.577 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.577 13:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.835 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.093 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.093 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.093 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.093 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.093 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.093 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.093 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.093 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
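The churn logged above is driven by ns_hotplug_stress.sh lines 16-18: a bounded counter loop (the repeated (( ++i )) / (( i < 10 )) guards) that attaches namespaces 1-8, each backed by one of the null0-null7 bdevs, to nqn.2016-06.io.spdk:cnode1 and then detaches them again. The sketch below is a minimal reconstruction of that pattern: the rpc.py path, the NQN, and the nsid-to-bdev mapping (nsid N uses null(N-1)) are taken verbatim from the logged commands, while the helper name, the per-namespace workers, and the backgrounding that would explain the interleaved ordering are assumptions rather than the script's confirmed structure.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                                     # hypothetical helper name
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do                 # matches the (( ++i )) / (( i < 10 )) guards at @16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17 in the log
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18 in the log
        done
    }

    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &             # eight concurrent workers would explain the shuffled order
    done
    wait

The fact that batches of eight adds tend to land before the matching removes is consistent with such workers racing each other rather than with any deliberate shuffling.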
00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.352 rmmod nvme_tcp 00:13:15.352 rmmod nvme_fabrics 00:13:15.352 rmmod nvme_keyring 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3514986 ']' 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3514986 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3514986 ']' 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3514986 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3514986 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3514986' 00:13:15.352 killing process with pid 3514986 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3514986 00:13:15.352 13:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3514986 00:13:15.610 13:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:15.610 13:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:15.610 13:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:15.610 13:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.610 13:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.610 13:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.610 13:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.610 13:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.147 13:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.147 00:13:18.147 real 0m45.952s 00:13:18.147 user 3m23.878s 00:13:18.147 sys 0m18.567s 00:13:18.147 13:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.147 13:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.147 ************************************ 00:13:18.147 END TEST nvmf_ns_hotplug_stress 00:13:18.147 ************************************ 00:13:18.147 13:21:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:18.147 13:21:15 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:18.147 13:21:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:18.147 13:21:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.147 13:21:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.147 ************************************ 00:13:18.147 START TEST nvmf_connect_stress 00:13:18.147 ************************************ 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:18.147 * Looking for test storage... 
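Just before the connect_stress prologue above, the hotplug test tears itself down via nvmftestfini: the EXIT trap is cleared, the nvme-tcp and nvme-fabrics modules are unloaded (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe's verbose output), the target process (pid 3514986 in this run) is killed and reaped, the cvl_0_0_ns_spdk namespace is removed, and the initiator address is flushed, after which the 0m45.952s real / 3m23.878s user summary closes the test. A rough sketch of that sequence follows; every command is taken from the log except the namespace deletion, whose exact form is hidden behind _remove_spdk_ns and is therefore an assumption.

    sync
    modprobe -v -r nvme-tcp           # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill 3514986 && wait 3514986      # killprocess: the nvmf_tgt pid from this run
    ip netns delete cvl_0_0_ns_spdk   # assumed expansion of _remove_spdk_ns
    ip -4 addr flush cvl_0_1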
00:13:18.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.147 13:21:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:20.084 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.084 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:20.085 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:20.085 Found net devices under 0000:09:00.0: cvl_0_0 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.085 13:21:17 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:20.085 Found net devices under 0000:09:00.1: cvl_0_1 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:20.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:20.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:13:20.085 00:13:20.085 --- 10.0.0.2 ping statistics --- 00:13:20.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.085 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:13:20.085 00:13:20.085 --- 10.0.0.1 ping statistics --- 00:13:20.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.085 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3522702 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3522702 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3522702 ']' 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.085 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.085 [2024-07-12 13:21:17.534718] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
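The reachability checks and target launch just logged follow the nvmf_tcp_init / nvmfappstart steps shown above: the target-side port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are pinged once, and nvmf_tgt is started inside the namespace (pid 3522702 in this run). Collected in order, the logged commands amount to the following; nothing here is invented, only gathered from the xtrace lines.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # 0.242 ms in this run
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # 0.173 ms in this run
    modprobe nvme-tcp
    # Target started inside the namespace, as logged:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The -m 0xE core mask leaves core 0 free, which is where the stress client is later pinned with -c 0x1, matching the three reactors reported on cores 1-3.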
00:13:20.085 [2024-07-12 13:21:17.534792] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.343 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.343 [2024-07-12 13:21:17.573575] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:20.343 [2024-07-12 13:21:17.599913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.343 [2024-07-12 13:21:17.689889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.343 [2024-07-12 13:21:17.689941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.343 [2024-07-12 13:21:17.689972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.343 [2024-07-12 13:21:17.689984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.343 [2024-07-12 13:21:17.689993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.343 [2024-07-12 13:21:17.690136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.343 [2024-07-12 13:21:17.690213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.343 [2024-07-12 13:21:17.690216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.343 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.343 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:20.343 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.343 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:20.343 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.601 [2024-07-12 13:21:17.823560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.601 [2024-07-12 13:21:17.859461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.601 NULL1 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3522848 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.601 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
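With the target up, connect_stress.sh provisions a minimal TCP subsystem and launches the stress client whose pid (3522848) the surrounding kill -0 checks keep polling; the seq 1 20 / cat loop interleaved here appears to build the rpc.txt batch file (reset by the rm -f at @25), though its payload is not visible in this excerpt. The logged RPCs and client invocation amount to the sketch below; rpc_cmd in the harness is a wrapper that is assumed here to reduce to direct scripts/rpc.py calls.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192    # -o and -u 8192 as logged (transport tuning flags)
    "$rpc_py" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" bdev_null_create NULL1 1000 512            # null bdev NULL1, size 1000 (MB), 512-byte blocks, as logged

    # Stress client: repeatedly connect/disconnect against the listener, flags as logged
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    kill -0 "$PERF_PID" && echo "connect_stress (pid $PERF_PID) still running"

The client runs for 10 seconds (-t 10) from core 0 (-c 0x1) while the script keeps poking the target over RPC, which is exactly the alive-check / rpc_cmd cadence that fills the remainder of this section.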
00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.602 13:21:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.860 13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.860 13:21:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:20.860 13:21:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.860 13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.860 13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.117 13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.117 13:21:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:21.117 13:21:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.117 13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.117 13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.682 
13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.682 13:21:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:21.682 13:21:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.682 13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.682 13:21:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.939 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.939 13:21:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:21.939 13:21:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.939 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.939 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.197 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.197 13:21:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:22.197 13:21:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.197 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.197 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.454 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.454 13:21:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:22.454 13:21:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.454 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.454 13:21:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.712 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.712 13:21:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:22.712 13:21:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.712 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.712 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.278 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.278 13:21:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:23.278 13:21:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.278 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.278 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.535 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.535 13:21:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:23.535 13:21:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.535 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.535 13:21:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.793 13:21:21 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.793 13:21:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:23.793 13:21:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.793 13:21:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.793 13:21:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.050 13:21:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.050 13:21:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:24.050 13:21:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.050 13:21:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.050 13:21:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.308 13:21:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.308 13:21:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:24.308 13:21:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.308 13:21:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.308 13:21:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.872 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.872 13:21:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:24.872 13:21:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.872 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.872 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.140 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.140 13:21:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:25.140 13:21:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.140 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.140 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.403 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.403 13:21:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:25.403 13:21:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.403 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.403 13:21:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.659 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.659 13:21:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:25.659 13:21:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.659 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.659 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.915 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.915 
13:21:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:25.915 13:21:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.915 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.915 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 13:21:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:26.479 13:21:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.479 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 13:21:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.737 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.737 13:21:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:26.737 13:21:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.737 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.737 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.996 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.996 13:21:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:26.996 13:21:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.996 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.996 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.253 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.253 13:21:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:27.253 13:21:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.253 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.253 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.817 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.817 13:21:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:27.817 13:21:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.817 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.817 13:21:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.074 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.074 13:21:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:28.074 13:21:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.074 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.074 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.331 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.331 13:21:25 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 3522848 00:13:28.331 13:21:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.331 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.331 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.588 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.589 13:21:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:28.589 13:21:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.589 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.589 13:21:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.846 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.846 13:21:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:28.846 13:21:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.846 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.846 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.411 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.411 13:21:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:29.411 13:21:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.411 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.411 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.668 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.668 13:21:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:29.668 13:21:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.668 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.668 13:21:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.926 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.926 13:21:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:29.926 13:21:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.926 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.926 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.183 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.183 13:21:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:30.183 13:21:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.183 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.183 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.441 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.441 13:21:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:30.441 
13:21:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.441 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.441 13:21:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.698 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3522848 00:13:30.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3522848) - No such process 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3522848 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:30.956 rmmod nvme_tcp 00:13:30.956 rmmod nvme_fabrics 00:13:30.956 rmmod nvme_keyring 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3522702 ']' 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3522702 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3522702 ']' 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3522702 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3522702 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3522702' 00:13:30.956 killing process with pid 3522702 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3522702 00:13:30.956 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3522702 00:13:31.215 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:13:31.215 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:31.215 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:31.215 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.215 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:31.215 13:21:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.215 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.215 13:21:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.117 13:21:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:33.117 00:13:33.117 real 0m15.434s 00:13:33.117 user 0m38.230s 00:13:33.117 sys 0m6.169s 00:13:33.117 13:21:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.117 13:21:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.117 ************************************ 00:13:33.117 END TEST nvmf_connect_stress 00:13:33.117 ************************************ 00:13:33.117 13:21:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:33.117 13:21:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:33.117 13:21:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:33.117 13:21:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.117 13:21:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.376 ************************************ 00:13:33.376 START TEST nvmf_fused_ordering 00:13:33.376 ************************************ 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:33.376 * Looking for test storage... 
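Before the fused_ordering run gets going, note what the nvmftestfini teardown that closed out nvmf_connect_stress above amounts to (a condensed reading of the xtrace; killprocess and _remove_spdk_ns are harness helpers, and the exact namespace-removal step is an assumption):

  sync
  modprobe -v -r nvme-tcp          # unloads nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 3522702 && wait 3522702     # killprocess: stop the nvmf_tgt reactor used by the previous test
  # _remove_spdk_ns: drop the cvl_0_0_ns_spdk network namespace (assumed to be an 'ip netns delete')
  ip -4 addr flush cvl_0_1

The same namespace and address plumbing is then rebuilt from scratch for the next test, as the nvmftestinit trace below shows.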
00:13:33.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:33.376 13:21:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:35.331 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:35.331 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:35.331 Found net devices under 0000:09:00.0: cvl_0_0 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:35.331 13:21:32 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:35.331 Found net devices under 0000:09:00.1: cvl_0_1 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:35.331 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:35.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:35.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:13:35.589 00:13:35.589 --- 10.0.0.2 ping statistics --- 00:13:35.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.589 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:35.589 00:13:35.589 --- 10.0.0.1 ping statistics --- 00:13:35.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.589 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3526015 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3526015 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3526015 ']' 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.589 13:21:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.589 [2024-07-12 13:21:33.003165] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
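Condensed, the nvmf_tcp_init plumbing plus the nvmfappstart launch traced above comes down to the commands below (taken from the xtrace with paths shortened; the cvl_0_0/cvl_0_1 device names and the 10.0.0.x addressing are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # nvmfpid=3526015 here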
00:13:35.590 [2024-07-12 13:21:33.003252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.590 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.590 [2024-07-12 13:21:33.039996] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:35.847 [2024-07-12 13:21:33.067526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.847 [2024-07-12 13:21:33.150382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.847 [2024-07-12 13:21:33.150451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.847 [2024-07-12 13:21:33.150472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.847 [2024-07-12 13:21:33.150483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.847 [2024-07-12 13:21:33.150493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.847 [2024-07-12 13:21:33.150519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.847 [2024-07-12 13:21:33.272680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.847 [2024-07-12 13:21:33.288820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.847 NULL1 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.847 13:21:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:36.105 [2024-07-12 13:21:33.332476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:13:36.106 [2024-07-12 13:21:33.332515] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526036 ] 00:13:36.106 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.106 [2024-07-12 13:21:33.363533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
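For reference, the target-side setup those rpc_cmd calls perform could be reproduced standalone with scripts/rpc.py roughly as follows (an equivalent rendering, not the literal harness invocation, which goes through the rpc_cmd wrapper), after which the fused_ordering tool is pointed at the new subsystem:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB null bdev with 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line that follows is the tool counting off its fused-command iterations against that subsystem.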
00:13:36.363 Attached to nqn.2016-06.io.spdk:cnode1 00:13:36.363 Namespace ID: 1 size: 1GB 00:13:36.363 fused_ordering(0) 00:13:36.363 fused_ordering(1) 00:13:36.363 fused_ordering(2) 00:13:36.363 fused_ordering(3) 00:13:36.363 fused_ordering(4) 00:13:36.363 fused_ordering(5) 00:13:36.363 fused_ordering(6) 00:13:36.363 fused_ordering(7) 00:13:36.363 fused_ordering(8) 00:13:36.363 fused_ordering(9) 00:13:36.363 fused_ordering(10) 00:13:36.363 fused_ordering(11) 00:13:36.363 fused_ordering(12) 00:13:36.363 fused_ordering(13) 00:13:36.363 fused_ordering(14) 00:13:36.363 fused_ordering(15) 00:13:36.363 fused_ordering(16) 00:13:36.363 fused_ordering(17) 00:13:36.363 fused_ordering(18) 00:13:36.363 fused_ordering(19) 00:13:36.363 fused_ordering(20) 00:13:36.363 fused_ordering(21) 00:13:36.363 fused_ordering(22) 00:13:36.363 fused_ordering(23) 00:13:36.363 fused_ordering(24) 00:13:36.363 fused_ordering(25) 00:13:36.363 fused_ordering(26) 00:13:36.363 fused_ordering(27) 00:13:36.363 fused_ordering(28) 00:13:36.363 fused_ordering(29) 00:13:36.363 fused_ordering(30) 00:13:36.363 fused_ordering(31) 00:13:36.363 fused_ordering(32) 00:13:36.363 fused_ordering(33) 00:13:36.363 fused_ordering(34) 00:13:36.363 fused_ordering(35) 00:13:36.363 fused_ordering(36) 00:13:36.363 fused_ordering(37) 00:13:36.363 fused_ordering(38) 00:13:36.363 fused_ordering(39) 00:13:36.363 fused_ordering(40) 00:13:36.363 fused_ordering(41) 00:13:36.363 fused_ordering(42) 00:13:36.363 fused_ordering(43) 00:13:36.363 fused_ordering(44) 00:13:36.363 fused_ordering(45) 00:13:36.363 fused_ordering(46) 00:13:36.363 fused_ordering(47) 00:13:36.363 fused_ordering(48) 00:13:36.363 fused_ordering(49) 00:13:36.363 fused_ordering(50) 00:13:36.363 fused_ordering(51) 00:13:36.363 fused_ordering(52) 00:13:36.363 fused_ordering(53) 00:13:36.363 fused_ordering(54) 00:13:36.363 fused_ordering(55) 00:13:36.363 fused_ordering(56) 00:13:36.363 fused_ordering(57) 00:13:36.363 fused_ordering(58) 00:13:36.363 fused_ordering(59) 00:13:36.363 fused_ordering(60) 00:13:36.363 fused_ordering(61) 00:13:36.363 fused_ordering(62) 00:13:36.363 fused_ordering(63) 00:13:36.363 fused_ordering(64) 00:13:36.363 fused_ordering(65) 00:13:36.363 fused_ordering(66) 00:13:36.363 fused_ordering(67) 00:13:36.363 fused_ordering(68) 00:13:36.363 fused_ordering(69) 00:13:36.363 fused_ordering(70) 00:13:36.363 fused_ordering(71) 00:13:36.363 fused_ordering(72) 00:13:36.363 fused_ordering(73) 00:13:36.363 fused_ordering(74) 00:13:36.363 fused_ordering(75) 00:13:36.363 fused_ordering(76) 00:13:36.363 fused_ordering(77) 00:13:36.363 fused_ordering(78) 00:13:36.363 fused_ordering(79) 00:13:36.363 fused_ordering(80) 00:13:36.363 fused_ordering(81) 00:13:36.363 fused_ordering(82) 00:13:36.363 fused_ordering(83) 00:13:36.363 fused_ordering(84) 00:13:36.363 fused_ordering(85) 00:13:36.363 fused_ordering(86) 00:13:36.363 fused_ordering(87) 00:13:36.363 fused_ordering(88) 00:13:36.363 fused_ordering(89) 00:13:36.363 fused_ordering(90) 00:13:36.363 fused_ordering(91) 00:13:36.363 fused_ordering(92) 00:13:36.363 fused_ordering(93) 00:13:36.363 fused_ordering(94) 00:13:36.363 fused_ordering(95) 00:13:36.363 fused_ordering(96) 00:13:36.363 fused_ordering(97) 00:13:36.363 fused_ordering(98) 00:13:36.363 fused_ordering(99) 00:13:36.363 fused_ordering(100) 00:13:36.363 fused_ordering(101) 00:13:36.363 fused_ordering(102) 00:13:36.363 fused_ordering(103) 00:13:36.363 fused_ordering(104) 00:13:36.363 fused_ordering(105) 00:13:36.363 fused_ordering(106) 00:13:36.363 fused_ordering(107) 
00:13:36.363 fused_ordering(108) 00:13:36.363 fused_ordering(109) 00:13:36.363 fused_ordering(110) 00:13:36.363 fused_ordering(111) 00:13:36.363 fused_ordering(112) 00:13:36.363 fused_ordering(113) 00:13:36.363 fused_ordering(114) 00:13:36.363 fused_ordering(115) 00:13:36.363 fused_ordering(116) 00:13:36.363 fused_ordering(117) 00:13:36.363 fused_ordering(118) 00:13:36.363 fused_ordering(119) 00:13:36.363 fused_ordering(120) 00:13:36.363 fused_ordering(121) 00:13:36.363 fused_ordering(122) 00:13:36.363 fused_ordering(123) 00:13:36.363 fused_ordering(124) 00:13:36.363 fused_ordering(125) 00:13:36.363 fused_ordering(126) 00:13:36.363 fused_ordering(127) 00:13:36.363 fused_ordering(128) 00:13:36.363 fused_ordering(129) 00:13:36.363 fused_ordering(130) 00:13:36.363 fused_ordering(131) 00:13:36.363 fused_ordering(132) 00:13:36.363 fused_ordering(133) 00:13:36.363 fused_ordering(134) 00:13:36.363 fused_ordering(135) 00:13:36.364 fused_ordering(136) 00:13:36.364 fused_ordering(137) 00:13:36.364 fused_ordering(138) 00:13:36.364 fused_ordering(139) 00:13:36.364 fused_ordering(140) 00:13:36.364 fused_ordering(141) 00:13:36.364 fused_ordering(142) 00:13:36.364 fused_ordering(143) 00:13:36.364 fused_ordering(144) 00:13:36.364 fused_ordering(145) 00:13:36.364 fused_ordering(146) 00:13:36.364 fused_ordering(147) 00:13:36.364 fused_ordering(148) 00:13:36.364 fused_ordering(149) 00:13:36.364 fused_ordering(150) 00:13:36.364 fused_ordering(151) 00:13:36.364 fused_ordering(152) 00:13:36.364 fused_ordering(153) 00:13:36.364 fused_ordering(154) 00:13:36.364 fused_ordering(155) 00:13:36.364 fused_ordering(156) 00:13:36.364 fused_ordering(157) 00:13:36.364 fused_ordering(158) 00:13:36.364 fused_ordering(159) 00:13:36.364 fused_ordering(160) 00:13:36.364 fused_ordering(161) 00:13:36.364 fused_ordering(162) 00:13:36.364 fused_ordering(163) 00:13:36.364 fused_ordering(164) 00:13:36.364 fused_ordering(165) 00:13:36.364 fused_ordering(166) 00:13:36.364 fused_ordering(167) 00:13:36.364 fused_ordering(168) 00:13:36.364 fused_ordering(169) 00:13:36.364 fused_ordering(170) 00:13:36.364 fused_ordering(171) 00:13:36.364 fused_ordering(172) 00:13:36.364 fused_ordering(173) 00:13:36.364 fused_ordering(174) 00:13:36.364 fused_ordering(175) 00:13:36.364 fused_ordering(176) 00:13:36.364 fused_ordering(177) 00:13:36.364 fused_ordering(178) 00:13:36.364 fused_ordering(179) 00:13:36.364 fused_ordering(180) 00:13:36.364 fused_ordering(181) 00:13:36.364 fused_ordering(182) 00:13:36.364 fused_ordering(183) 00:13:36.364 fused_ordering(184) 00:13:36.364 fused_ordering(185) 00:13:36.364 fused_ordering(186) 00:13:36.364 fused_ordering(187) 00:13:36.364 fused_ordering(188) 00:13:36.364 fused_ordering(189) 00:13:36.364 fused_ordering(190) 00:13:36.364 fused_ordering(191) 00:13:36.364 fused_ordering(192) 00:13:36.364 fused_ordering(193) 00:13:36.364 fused_ordering(194) 00:13:36.364 fused_ordering(195) 00:13:36.364 fused_ordering(196) 00:13:36.364 fused_ordering(197) 00:13:36.364 fused_ordering(198) 00:13:36.364 fused_ordering(199) 00:13:36.364 fused_ordering(200) 00:13:36.364 fused_ordering(201) 00:13:36.364 fused_ordering(202) 00:13:36.364 fused_ordering(203) 00:13:36.364 fused_ordering(204) 00:13:36.364 fused_ordering(205) 00:13:36.929 fused_ordering(206) 00:13:36.929 fused_ordering(207) 00:13:36.929 fused_ordering(208) 00:13:36.929 fused_ordering(209) 00:13:36.929 fused_ordering(210) 00:13:36.929 fused_ordering(211) 00:13:36.929 fused_ordering(212) 00:13:36.929 fused_ordering(213) 00:13:36.929 fused_ordering(214) 00:13:36.929 
fused_ordering(215) 00:13:36.929 fused_ordering(216) 00:13:36.929 fused_ordering(217) 00:13:36.929 fused_ordering(218) 00:13:36.929 fused_ordering(219) 00:13:36.929 fused_ordering(220) 00:13:36.929 fused_ordering(221) 00:13:36.929 fused_ordering(222) 00:13:36.929 fused_ordering(223) 00:13:36.929 fused_ordering(224) 00:13:36.929 fused_ordering(225) 00:13:36.929 fused_ordering(226) 00:13:36.929 fused_ordering(227) 00:13:36.929 fused_ordering(228) 00:13:36.929 fused_ordering(229) 00:13:36.929 fused_ordering(230) 00:13:36.929 fused_ordering(231) 00:13:36.929 fused_ordering(232) 00:13:36.929 fused_ordering(233) 00:13:36.929 fused_ordering(234) 00:13:36.929 fused_ordering(235) 00:13:36.929 fused_ordering(236) 00:13:36.929 fused_ordering(237) 00:13:36.929 fused_ordering(238) 00:13:36.929 fused_ordering(239) 00:13:36.929 fused_ordering(240) 00:13:36.929 fused_ordering(241) 00:13:36.929 fused_ordering(242) 00:13:36.929 fused_ordering(243) 00:13:36.929 fused_ordering(244) 00:13:36.929 fused_ordering(245) 00:13:36.929 fused_ordering(246) 00:13:36.929 fused_ordering(247) 00:13:36.929 fused_ordering(248) 00:13:36.929 fused_ordering(249) 00:13:36.929 fused_ordering(250) 00:13:36.929 fused_ordering(251) 00:13:36.929 fused_ordering(252) 00:13:36.929 fused_ordering(253) 00:13:36.929 fused_ordering(254) 00:13:36.929 fused_ordering(255) 00:13:36.929 fused_ordering(256) 00:13:36.929 fused_ordering(257) 00:13:36.929 fused_ordering(258) 00:13:36.929 fused_ordering(259) 00:13:36.929 fused_ordering(260) 00:13:36.929 fused_ordering(261) 00:13:36.929 fused_ordering(262) 00:13:36.929 fused_ordering(263) 00:13:36.929 fused_ordering(264) 00:13:36.929 fused_ordering(265) 00:13:36.929 fused_ordering(266) 00:13:36.929 fused_ordering(267) 00:13:36.929 fused_ordering(268) 00:13:36.929 fused_ordering(269) 00:13:36.929 fused_ordering(270) 00:13:36.929 fused_ordering(271) 00:13:36.929 fused_ordering(272) 00:13:36.929 fused_ordering(273) 00:13:36.929 fused_ordering(274) 00:13:36.929 fused_ordering(275) 00:13:36.929 fused_ordering(276) 00:13:36.929 fused_ordering(277) 00:13:36.929 fused_ordering(278) 00:13:36.929 fused_ordering(279) 00:13:36.929 fused_ordering(280) 00:13:36.929 fused_ordering(281) 00:13:36.929 fused_ordering(282) 00:13:36.929 fused_ordering(283) 00:13:36.929 fused_ordering(284) 00:13:36.929 fused_ordering(285) 00:13:36.929 fused_ordering(286) 00:13:36.929 fused_ordering(287) 00:13:36.929 fused_ordering(288) 00:13:36.929 fused_ordering(289) 00:13:36.929 fused_ordering(290) 00:13:36.929 fused_ordering(291) 00:13:36.929 fused_ordering(292) 00:13:36.929 fused_ordering(293) 00:13:36.929 fused_ordering(294) 00:13:36.929 fused_ordering(295) 00:13:36.929 fused_ordering(296) 00:13:36.929 fused_ordering(297) 00:13:36.929 fused_ordering(298) 00:13:36.929 fused_ordering(299) 00:13:36.929 fused_ordering(300) 00:13:36.929 fused_ordering(301) 00:13:36.929 fused_ordering(302) 00:13:36.929 fused_ordering(303) 00:13:36.929 fused_ordering(304) 00:13:36.929 fused_ordering(305) 00:13:36.929 fused_ordering(306) 00:13:36.929 fused_ordering(307) 00:13:36.929 fused_ordering(308) 00:13:36.929 fused_ordering(309) 00:13:36.929 fused_ordering(310) 00:13:36.929 fused_ordering(311) 00:13:36.929 fused_ordering(312) 00:13:36.929 fused_ordering(313) 00:13:36.929 fused_ordering(314) 00:13:36.929 fused_ordering(315) 00:13:36.929 fused_ordering(316) 00:13:36.929 fused_ordering(317) 00:13:36.929 fused_ordering(318) 00:13:36.929 fused_ordering(319) 00:13:36.929 fused_ordering(320) 00:13:36.929 fused_ordering(321) 00:13:36.929 fused_ordering(322) 
00:13:36.929 fused_ordering(323) 00:13:36.929 fused_ordering(324) 00:13:36.929 fused_ordering(325) 00:13:36.929 fused_ordering(326) 00:13:36.929 fused_ordering(327) 00:13:36.929 fused_ordering(328) 00:13:36.929 fused_ordering(329) 00:13:36.929 fused_ordering(330) 00:13:36.929 fused_ordering(331) 00:13:36.929 fused_ordering(332) 00:13:36.929 fused_ordering(333) 00:13:36.929 fused_ordering(334) 00:13:36.929 fused_ordering(335) 00:13:36.929 fused_ordering(336) 00:13:36.929 fused_ordering(337) 00:13:36.929 fused_ordering(338) 00:13:36.929 fused_ordering(339) 00:13:36.929 fused_ordering(340) 00:13:36.929 fused_ordering(341) 00:13:36.929 fused_ordering(342) 00:13:36.929 fused_ordering(343) 00:13:36.929 fused_ordering(344) 00:13:36.929 fused_ordering(345) 00:13:36.929 fused_ordering(346) 00:13:36.929 fused_ordering(347) 00:13:36.929 fused_ordering(348) 00:13:36.929 fused_ordering(349) 00:13:36.929 fused_ordering(350) 00:13:36.929 fused_ordering(351) 00:13:36.929 fused_ordering(352) 00:13:36.929 fused_ordering(353) 00:13:36.929 fused_ordering(354) 00:13:36.929 fused_ordering(355) 00:13:36.929 fused_ordering(356) 00:13:36.929 fused_ordering(357) 00:13:36.929 fused_ordering(358) 00:13:36.929 fused_ordering(359) 00:13:36.929 fused_ordering(360) 00:13:36.929 fused_ordering(361) 00:13:36.929 fused_ordering(362) 00:13:36.929 fused_ordering(363) 00:13:36.929 fused_ordering(364) 00:13:36.929 fused_ordering(365) 00:13:36.929 fused_ordering(366) 00:13:36.929 fused_ordering(367) 00:13:36.929 fused_ordering(368) 00:13:36.929 fused_ordering(369) 00:13:36.929 fused_ordering(370) 00:13:36.929 fused_ordering(371) 00:13:36.929 fused_ordering(372) 00:13:36.929 fused_ordering(373) 00:13:36.929 fused_ordering(374) 00:13:36.929 fused_ordering(375) 00:13:36.929 fused_ordering(376) 00:13:36.929 fused_ordering(377) 00:13:36.929 fused_ordering(378) 00:13:36.929 fused_ordering(379) 00:13:36.929 fused_ordering(380) 00:13:36.929 fused_ordering(381) 00:13:36.929 fused_ordering(382) 00:13:36.929 fused_ordering(383) 00:13:36.929 fused_ordering(384) 00:13:36.929 fused_ordering(385) 00:13:36.929 fused_ordering(386) 00:13:36.929 fused_ordering(387) 00:13:36.929 fused_ordering(388) 00:13:36.929 fused_ordering(389) 00:13:36.929 fused_ordering(390) 00:13:36.929 fused_ordering(391) 00:13:36.929 fused_ordering(392) 00:13:36.929 fused_ordering(393) 00:13:36.929 fused_ordering(394) 00:13:36.929 fused_ordering(395) 00:13:36.929 fused_ordering(396) 00:13:36.929 fused_ordering(397) 00:13:36.929 fused_ordering(398) 00:13:36.929 fused_ordering(399) 00:13:36.929 fused_ordering(400) 00:13:36.929 fused_ordering(401) 00:13:36.929 fused_ordering(402) 00:13:36.929 fused_ordering(403) 00:13:36.929 fused_ordering(404) 00:13:36.929 fused_ordering(405) 00:13:36.929 fused_ordering(406) 00:13:36.929 fused_ordering(407) 00:13:36.929 fused_ordering(408) 00:13:36.929 fused_ordering(409) 00:13:36.929 fused_ordering(410) 00:13:37.494 fused_ordering(411) 00:13:37.494 fused_ordering(412) 00:13:37.494 fused_ordering(413) 00:13:37.494 fused_ordering(414) 00:13:37.494 fused_ordering(415) 00:13:37.494 fused_ordering(416) 00:13:37.494 fused_ordering(417) 00:13:37.494 fused_ordering(418) 00:13:37.494 fused_ordering(419) 00:13:37.494 fused_ordering(420) 00:13:37.494 fused_ordering(421) 00:13:37.494 fused_ordering(422) 00:13:37.494 fused_ordering(423) 00:13:37.494 fused_ordering(424) 00:13:37.494 fused_ordering(425) 00:13:37.494 fused_ordering(426) 00:13:37.494 fused_ordering(427) 00:13:37.494 fused_ordering(428) 00:13:37.494 fused_ordering(429) 00:13:37.494 
fused_ordering(430) 00:13:37.494 fused_ordering(431) 00:13:37.494 fused_ordering(432) 00:13:37.494 fused_ordering(433) 00:13:37.494 fused_ordering(434) 00:13:37.494 fused_ordering(435) 00:13:37.494 fused_ordering(436) 00:13:37.494 fused_ordering(437) 00:13:37.494 fused_ordering(438) 00:13:37.494 fused_ordering(439) 00:13:37.494 fused_ordering(440) 00:13:37.494 fused_ordering(441) 00:13:37.494 fused_ordering(442) 00:13:37.494 fused_ordering(443) 00:13:37.494 fused_ordering(444) 00:13:37.494 fused_ordering(445) 00:13:37.494 fused_ordering(446) 00:13:37.494 fused_ordering(447) 00:13:37.494 fused_ordering(448) 00:13:37.494 fused_ordering(449) 00:13:37.494 fused_ordering(450) 00:13:37.494 fused_ordering(451) 00:13:37.494 fused_ordering(452) 00:13:37.494 fused_ordering(453) 00:13:37.494 fused_ordering(454) 00:13:37.494 fused_ordering(455) 00:13:37.494 fused_ordering(456) 00:13:37.494 fused_ordering(457) 00:13:37.494 fused_ordering(458) 00:13:37.494 fused_ordering(459) 00:13:37.494 fused_ordering(460) 00:13:37.494 fused_ordering(461) 00:13:37.494 fused_ordering(462) 00:13:37.494 fused_ordering(463) 00:13:37.494 fused_ordering(464) 00:13:37.494 fused_ordering(465) 00:13:37.494 fused_ordering(466) 00:13:37.494 fused_ordering(467) 00:13:37.494 fused_ordering(468) 00:13:37.494 fused_ordering(469) 00:13:37.494 fused_ordering(470) 00:13:37.494 fused_ordering(471) 00:13:37.494 fused_ordering(472) 00:13:37.494 fused_ordering(473) 00:13:37.494 fused_ordering(474) 00:13:37.494 fused_ordering(475) 00:13:37.494 fused_ordering(476) 00:13:37.495 fused_ordering(477) 00:13:37.495 fused_ordering(478) 00:13:37.495 fused_ordering(479) 00:13:37.495 fused_ordering(480) 00:13:37.495 fused_ordering(481) 00:13:37.495 fused_ordering(482) 00:13:37.495 fused_ordering(483) 00:13:37.495 fused_ordering(484) 00:13:37.495 fused_ordering(485) 00:13:37.495 fused_ordering(486) 00:13:37.495 fused_ordering(487) 00:13:37.495 fused_ordering(488) 00:13:37.495 fused_ordering(489) 00:13:37.495 fused_ordering(490) 00:13:37.495 fused_ordering(491) 00:13:37.495 fused_ordering(492) 00:13:37.495 fused_ordering(493) 00:13:37.495 fused_ordering(494) 00:13:37.495 fused_ordering(495) 00:13:37.495 fused_ordering(496) 00:13:37.495 fused_ordering(497) 00:13:37.495 fused_ordering(498) 00:13:37.495 fused_ordering(499) 00:13:37.495 fused_ordering(500) 00:13:37.495 fused_ordering(501) 00:13:37.495 fused_ordering(502) 00:13:37.495 fused_ordering(503) 00:13:37.495 fused_ordering(504) 00:13:37.495 fused_ordering(505) 00:13:37.495 fused_ordering(506) 00:13:37.495 fused_ordering(507) 00:13:37.495 fused_ordering(508) 00:13:37.495 fused_ordering(509) 00:13:37.495 fused_ordering(510) 00:13:37.495 fused_ordering(511) 00:13:37.495 fused_ordering(512) 00:13:37.495 fused_ordering(513) 00:13:37.495 fused_ordering(514) 00:13:37.495 fused_ordering(515) 00:13:37.495 fused_ordering(516) 00:13:37.495 fused_ordering(517) 00:13:37.495 fused_ordering(518) 00:13:37.495 fused_ordering(519) 00:13:37.495 fused_ordering(520) 00:13:37.495 fused_ordering(521) 00:13:37.495 fused_ordering(522) 00:13:37.495 fused_ordering(523) 00:13:37.495 fused_ordering(524) 00:13:37.495 fused_ordering(525) 00:13:37.495 fused_ordering(526) 00:13:37.495 fused_ordering(527) 00:13:37.495 fused_ordering(528) 00:13:37.495 fused_ordering(529) 00:13:37.495 fused_ordering(530) 00:13:37.495 fused_ordering(531) 00:13:37.495 fused_ordering(532) 00:13:37.495 fused_ordering(533) 00:13:37.495 fused_ordering(534) 00:13:37.495 fused_ordering(535) 00:13:37.495 fused_ordering(536) 00:13:37.495 fused_ordering(537) 
00:13:37.495 fused_ordering(538) 00:13:37.495 fused_ordering(539) 00:13:37.495 fused_ordering(540) 00:13:37.495 fused_ordering(541) 00:13:37.495 fused_ordering(542) 00:13:37.495 fused_ordering(543) 00:13:37.495 fused_ordering(544) 00:13:37.495 fused_ordering(545) 00:13:37.495 fused_ordering(546) 00:13:37.495 fused_ordering(547) 00:13:37.495 fused_ordering(548) 00:13:37.495 fused_ordering(549) 00:13:37.495 fused_ordering(550) 00:13:37.495 fused_ordering(551) 00:13:37.495 fused_ordering(552) 00:13:37.495 fused_ordering(553) 00:13:37.495 fused_ordering(554) 00:13:37.495 fused_ordering(555) 00:13:37.495 fused_ordering(556) 00:13:37.495 fused_ordering(557) 00:13:37.495 fused_ordering(558) 00:13:37.495 fused_ordering(559) 00:13:37.495 fused_ordering(560) 00:13:37.495 fused_ordering(561) 00:13:37.495 fused_ordering(562) 00:13:37.495 fused_ordering(563) 00:13:37.495 fused_ordering(564) 00:13:37.495 fused_ordering(565) 00:13:37.495 fused_ordering(566) 00:13:37.495 fused_ordering(567) 00:13:37.495 fused_ordering(568) 00:13:37.495 fused_ordering(569) 00:13:37.495 fused_ordering(570) 00:13:37.495 fused_ordering(571) 00:13:37.495 fused_ordering(572) 00:13:37.495 fused_ordering(573) 00:13:37.495 fused_ordering(574) 00:13:37.495 fused_ordering(575) 00:13:37.495 fused_ordering(576) 00:13:37.495 fused_ordering(577) 00:13:37.495 fused_ordering(578) 00:13:37.495 fused_ordering(579) 00:13:37.495 fused_ordering(580) 00:13:37.495 fused_ordering(581) 00:13:37.495 fused_ordering(582) 00:13:37.495 fused_ordering(583) 00:13:37.495 fused_ordering(584) 00:13:37.495 fused_ordering(585) 00:13:37.495 fused_ordering(586) 00:13:37.495 fused_ordering(587) 00:13:37.495 fused_ordering(588) 00:13:37.495 fused_ordering(589) 00:13:37.495 fused_ordering(590) 00:13:37.495 fused_ordering(591) 00:13:37.495 fused_ordering(592) 00:13:37.495 fused_ordering(593) 00:13:37.495 fused_ordering(594) 00:13:37.495 fused_ordering(595) 00:13:37.495 fused_ordering(596) 00:13:37.495 fused_ordering(597) 00:13:37.495 fused_ordering(598) 00:13:37.495 fused_ordering(599) 00:13:37.495 fused_ordering(600) 00:13:37.495 fused_ordering(601) 00:13:37.495 fused_ordering(602) 00:13:37.495 fused_ordering(603) 00:13:37.495 fused_ordering(604) 00:13:37.495 fused_ordering(605) 00:13:37.495 fused_ordering(606) 00:13:37.495 fused_ordering(607) 00:13:37.495 fused_ordering(608) 00:13:37.495 fused_ordering(609) 00:13:37.495 fused_ordering(610) 00:13:37.495 fused_ordering(611) 00:13:37.495 fused_ordering(612) 00:13:37.495 fused_ordering(613) 00:13:37.495 fused_ordering(614) 00:13:37.495 fused_ordering(615) 00:13:38.060 fused_ordering(616) 00:13:38.060 fused_ordering(617) 00:13:38.060 fused_ordering(618) 00:13:38.060 fused_ordering(619) 00:13:38.060 fused_ordering(620) 00:13:38.060 fused_ordering(621) 00:13:38.060 fused_ordering(622) 00:13:38.060 fused_ordering(623) 00:13:38.060 fused_ordering(624) 00:13:38.060 fused_ordering(625) 00:13:38.060 fused_ordering(626) 00:13:38.060 fused_ordering(627) 00:13:38.060 fused_ordering(628) 00:13:38.060 fused_ordering(629) 00:13:38.060 fused_ordering(630) 00:13:38.060 fused_ordering(631) 00:13:38.060 fused_ordering(632) 00:13:38.060 fused_ordering(633) 00:13:38.060 fused_ordering(634) 00:13:38.060 fused_ordering(635) 00:13:38.060 fused_ordering(636) 00:13:38.060 fused_ordering(637) 00:13:38.060 fused_ordering(638) 00:13:38.060 fused_ordering(639) 00:13:38.060 fused_ordering(640) 00:13:38.060 fused_ordering(641) 00:13:38.060 fused_ordering(642) 00:13:38.060 fused_ordering(643) 00:13:38.060 fused_ordering(644) 00:13:38.060 
fused_ordering(645) 00:13:38.060 fused_ordering(646) 00:13:38.060 fused_ordering(647) 00:13:38.060 fused_ordering(648) 00:13:38.060 fused_ordering(649) 00:13:38.060 fused_ordering(650) 00:13:38.060 fused_ordering(651) 00:13:38.060 fused_ordering(652) 00:13:38.060 fused_ordering(653) 00:13:38.060 fused_ordering(654) 00:13:38.060 fused_ordering(655) 00:13:38.060 fused_ordering(656) 00:13:38.060 fused_ordering(657) 00:13:38.060 fused_ordering(658) 00:13:38.060 fused_ordering(659) 00:13:38.060 fused_ordering(660) 00:13:38.060 fused_ordering(661) 00:13:38.060 fused_ordering(662) 00:13:38.060 fused_ordering(663) 00:13:38.060 fused_ordering(664) 00:13:38.060 fused_ordering(665) 00:13:38.060 fused_ordering(666) 00:13:38.060 fused_ordering(667) 00:13:38.060 fused_ordering(668) 00:13:38.060 fused_ordering(669) 00:13:38.060 fused_ordering(670) 00:13:38.060 fused_ordering(671) 00:13:38.060 fused_ordering(672) 00:13:38.060 fused_ordering(673) 00:13:38.060 fused_ordering(674) 00:13:38.060 fused_ordering(675) 00:13:38.060 fused_ordering(676) 00:13:38.060 fused_ordering(677) 00:13:38.060 fused_ordering(678) 00:13:38.060 fused_ordering(679) 00:13:38.060 fused_ordering(680) 00:13:38.060 fused_ordering(681) 00:13:38.060 fused_ordering(682) 00:13:38.060 fused_ordering(683) 00:13:38.060 fused_ordering(684) 00:13:38.060 fused_ordering(685) 00:13:38.060 fused_ordering(686) 00:13:38.060 fused_ordering(687) 00:13:38.060 fused_ordering(688) 00:13:38.060 fused_ordering(689) 00:13:38.060 fused_ordering(690) 00:13:38.060 fused_ordering(691) 00:13:38.060 fused_ordering(692) 00:13:38.060 fused_ordering(693) 00:13:38.060 fused_ordering(694) 00:13:38.060 fused_ordering(695) 00:13:38.060 fused_ordering(696) 00:13:38.060 fused_ordering(697) 00:13:38.060 fused_ordering(698) 00:13:38.060 fused_ordering(699) 00:13:38.060 fused_ordering(700) 00:13:38.060 fused_ordering(701) 00:13:38.060 fused_ordering(702) 00:13:38.060 fused_ordering(703) 00:13:38.060 fused_ordering(704) 00:13:38.060 fused_ordering(705) 00:13:38.060 fused_ordering(706) 00:13:38.060 fused_ordering(707) 00:13:38.060 fused_ordering(708) 00:13:38.060 fused_ordering(709) 00:13:38.060 fused_ordering(710) 00:13:38.060 fused_ordering(711) 00:13:38.060 fused_ordering(712) 00:13:38.060 fused_ordering(713) 00:13:38.060 fused_ordering(714) 00:13:38.060 fused_ordering(715) 00:13:38.060 fused_ordering(716) 00:13:38.060 fused_ordering(717) 00:13:38.060 fused_ordering(718) 00:13:38.060 fused_ordering(719) 00:13:38.060 fused_ordering(720) 00:13:38.060 fused_ordering(721) 00:13:38.060 fused_ordering(722) 00:13:38.060 fused_ordering(723) 00:13:38.060 fused_ordering(724) 00:13:38.060 fused_ordering(725) 00:13:38.060 fused_ordering(726) 00:13:38.060 fused_ordering(727) 00:13:38.060 fused_ordering(728) 00:13:38.060 fused_ordering(729) 00:13:38.060 fused_ordering(730) 00:13:38.060 fused_ordering(731) 00:13:38.060 fused_ordering(732) 00:13:38.060 fused_ordering(733) 00:13:38.060 fused_ordering(734) 00:13:38.060 fused_ordering(735) 00:13:38.060 fused_ordering(736) 00:13:38.061 fused_ordering(737) 00:13:38.061 fused_ordering(738) 00:13:38.061 fused_ordering(739) 00:13:38.061 fused_ordering(740) 00:13:38.061 fused_ordering(741) 00:13:38.061 fused_ordering(742) 00:13:38.061 fused_ordering(743) 00:13:38.061 fused_ordering(744) 00:13:38.061 fused_ordering(745) 00:13:38.061 fused_ordering(746) 00:13:38.061 fused_ordering(747) 00:13:38.061 fused_ordering(748) 00:13:38.061 fused_ordering(749) 00:13:38.061 fused_ordering(750) 00:13:38.061 fused_ordering(751) 00:13:38.061 fused_ordering(752) 
00:13:38.061 fused_ordering(753) 00:13:38.061 fused_ordering(754) 00:13:38.061 fused_ordering(755) 00:13:38.061 fused_ordering(756) 00:13:38.061 fused_ordering(757) 00:13:38.061 fused_ordering(758) 00:13:38.061 fused_ordering(759) 00:13:38.061 fused_ordering(760) 00:13:38.061 fused_ordering(761) 00:13:38.061 fused_ordering(762) 00:13:38.061 fused_ordering(763) 00:13:38.061 fused_ordering(764) 00:13:38.061 fused_ordering(765) 00:13:38.061 fused_ordering(766) 00:13:38.061 fused_ordering(767) 00:13:38.061 fused_ordering(768) 00:13:38.061 fused_ordering(769) 00:13:38.061 fused_ordering(770) 00:13:38.061 fused_ordering(771) 00:13:38.061 fused_ordering(772) 00:13:38.061 fused_ordering(773) 00:13:38.061 fused_ordering(774) 00:13:38.061 fused_ordering(775) 00:13:38.061 fused_ordering(776) 00:13:38.061 fused_ordering(777) 00:13:38.061 fused_ordering(778) 00:13:38.061 fused_ordering(779) 00:13:38.061 fused_ordering(780) 00:13:38.061 fused_ordering(781) 00:13:38.061 fused_ordering(782) 00:13:38.061 fused_ordering(783) 00:13:38.061 fused_ordering(784) 00:13:38.061 fused_ordering(785) 00:13:38.061 fused_ordering(786) 00:13:38.061 fused_ordering(787) 00:13:38.061 fused_ordering(788) 00:13:38.061 fused_ordering(789) 00:13:38.061 fused_ordering(790) 00:13:38.061 fused_ordering(791) 00:13:38.061 fused_ordering(792) 00:13:38.061 fused_ordering(793) 00:13:38.061 fused_ordering(794) 00:13:38.061 fused_ordering(795) 00:13:38.061 fused_ordering(796) 00:13:38.061 fused_ordering(797) 00:13:38.061 fused_ordering(798) 00:13:38.061 fused_ordering(799) 00:13:38.061 fused_ordering(800) 00:13:38.061 fused_ordering(801) 00:13:38.061 fused_ordering(802) 00:13:38.061 fused_ordering(803) 00:13:38.061 fused_ordering(804) 00:13:38.061 fused_ordering(805) 00:13:38.061 fused_ordering(806) 00:13:38.061 fused_ordering(807) 00:13:38.061 fused_ordering(808) 00:13:38.061 fused_ordering(809) 00:13:38.061 fused_ordering(810) 00:13:38.061 fused_ordering(811) 00:13:38.061 fused_ordering(812) 00:13:38.061 fused_ordering(813) 00:13:38.061 fused_ordering(814) 00:13:38.061 fused_ordering(815) 00:13:38.061 fused_ordering(816) 00:13:38.061 fused_ordering(817) 00:13:38.061 fused_ordering(818) 00:13:38.061 fused_ordering(819) 00:13:38.061 fused_ordering(820) 00:13:38.626 fused_ordering(821) 00:13:38.626 fused_ordering(822) 00:13:38.626 fused_ordering(823) 00:13:38.626 fused_ordering(824) 00:13:38.626 fused_ordering(825) 00:13:38.626 fused_ordering(826) 00:13:38.626 fused_ordering(827) 00:13:38.626 fused_ordering(828) 00:13:38.626 fused_ordering(829) 00:13:38.626 fused_ordering(830) 00:13:38.626 fused_ordering(831) 00:13:38.626 fused_ordering(832) 00:13:38.626 fused_ordering(833) 00:13:38.626 fused_ordering(834) 00:13:38.626 fused_ordering(835) 00:13:38.626 fused_ordering(836) 00:13:38.626 fused_ordering(837) 00:13:38.626 fused_ordering(838) 00:13:38.626 fused_ordering(839) 00:13:38.626 fused_ordering(840) 00:13:38.626 fused_ordering(841) 00:13:38.626 fused_ordering(842) 00:13:38.626 fused_ordering(843) 00:13:38.626 fused_ordering(844) 00:13:38.626 fused_ordering(845) 00:13:38.626 fused_ordering(846) 00:13:38.626 fused_ordering(847) 00:13:38.626 fused_ordering(848) 00:13:38.626 fused_ordering(849) 00:13:38.626 fused_ordering(850) 00:13:38.626 fused_ordering(851) 00:13:38.626 fused_ordering(852) 00:13:38.626 fused_ordering(853) 00:13:38.626 fused_ordering(854) 00:13:38.626 fused_ordering(855) 00:13:38.626 fused_ordering(856) 00:13:38.626 fused_ordering(857) 00:13:38.626 fused_ordering(858) 00:13:38.626 fused_ordering(859) 00:13:38.626 
fused_ordering(860) 00:13:38.626 fused_ordering(861) 00:13:38.626 fused_ordering(862) 00:13:38.626 fused_ordering(863) 00:13:38.626 fused_ordering(864) 00:13:38.626 fused_ordering(865) 00:13:38.626 fused_ordering(866) 00:13:38.626 fused_ordering(867) 00:13:38.626 fused_ordering(868) 00:13:38.626 fused_ordering(869) 00:13:38.626 fused_ordering(870) 00:13:38.626 fused_ordering(871) 00:13:38.626 fused_ordering(872) 00:13:38.626 fused_ordering(873) 00:13:38.626 fused_ordering(874) 00:13:38.626 fused_ordering(875) 00:13:38.626 fused_ordering(876) 00:13:38.626 fused_ordering(877) 00:13:38.626 fused_ordering(878) 00:13:38.626 fused_ordering(879) 00:13:38.626 fused_ordering(880) 00:13:38.626 fused_ordering(881) 00:13:38.626 fused_ordering(882) 00:13:38.626 fused_ordering(883) 00:13:38.626 fused_ordering(884) 00:13:38.626 fused_ordering(885) 00:13:38.626 fused_ordering(886) 00:13:38.626 fused_ordering(887) 00:13:38.626 fused_ordering(888) 00:13:38.626 fused_ordering(889) 00:13:38.626 fused_ordering(890) 00:13:38.626 fused_ordering(891) 00:13:38.626 fused_ordering(892) 00:13:38.626 fused_ordering(893) 00:13:38.626 fused_ordering(894) 00:13:38.626 fused_ordering(895) 00:13:38.626 fused_ordering(896) 00:13:38.626 fused_ordering(897) 00:13:38.626 fused_ordering(898) 00:13:38.626 fused_ordering(899) 00:13:38.626 fused_ordering(900) 00:13:38.626 fused_ordering(901) 00:13:38.626 fused_ordering(902) 00:13:38.626 fused_ordering(903) 00:13:38.626 fused_ordering(904) 00:13:38.626 fused_ordering(905) 00:13:38.626 fused_ordering(906) 00:13:38.626 fused_ordering(907) 00:13:38.626 fused_ordering(908) 00:13:38.626 fused_ordering(909) 00:13:38.626 fused_ordering(910) 00:13:38.626 fused_ordering(911) 00:13:38.626 fused_ordering(912) 00:13:38.626 fused_ordering(913) 00:13:38.626 fused_ordering(914) 00:13:38.626 fused_ordering(915) 00:13:38.626 fused_ordering(916) 00:13:38.626 fused_ordering(917) 00:13:38.626 fused_ordering(918) 00:13:38.626 fused_ordering(919) 00:13:38.626 fused_ordering(920) 00:13:38.626 fused_ordering(921) 00:13:38.626 fused_ordering(922) 00:13:38.626 fused_ordering(923) 00:13:38.626 fused_ordering(924) 00:13:38.626 fused_ordering(925) 00:13:38.626 fused_ordering(926) 00:13:38.626 fused_ordering(927) 00:13:38.626 fused_ordering(928) 00:13:38.626 fused_ordering(929) 00:13:38.626 fused_ordering(930) 00:13:38.626 fused_ordering(931) 00:13:38.626 fused_ordering(932) 00:13:38.626 fused_ordering(933) 00:13:38.626 fused_ordering(934) 00:13:38.626 fused_ordering(935) 00:13:38.626 fused_ordering(936) 00:13:38.626 fused_ordering(937) 00:13:38.626 fused_ordering(938) 00:13:38.626 fused_ordering(939) 00:13:38.626 fused_ordering(940) 00:13:38.626 fused_ordering(941) 00:13:38.626 fused_ordering(942) 00:13:38.626 fused_ordering(943) 00:13:38.626 fused_ordering(944) 00:13:38.626 fused_ordering(945) 00:13:38.626 fused_ordering(946) 00:13:38.626 fused_ordering(947) 00:13:38.626 fused_ordering(948) 00:13:38.626 fused_ordering(949) 00:13:38.626 fused_ordering(950) 00:13:38.626 fused_ordering(951) 00:13:38.626 fused_ordering(952) 00:13:38.626 fused_ordering(953) 00:13:38.626 fused_ordering(954) 00:13:38.626 fused_ordering(955) 00:13:38.626 fused_ordering(956) 00:13:38.626 fused_ordering(957) 00:13:38.626 fused_ordering(958) 00:13:38.626 fused_ordering(959) 00:13:38.626 fused_ordering(960) 00:13:38.626 fused_ordering(961) 00:13:38.626 fused_ordering(962) 00:13:38.626 fused_ordering(963) 00:13:38.626 fused_ordering(964) 00:13:38.626 fused_ordering(965) 00:13:38.626 fused_ordering(966) 00:13:38.626 fused_ordering(967) 
00:13:38.626 fused_ordering(968) 00:13:38.626 fused_ordering(969) 00:13:38.626 fused_ordering(970) 00:13:38.626 fused_ordering(971) 00:13:38.626 fused_ordering(972) 00:13:38.626 fused_ordering(973) 00:13:38.626 fused_ordering(974) 00:13:38.626 fused_ordering(975) 00:13:38.626 fused_ordering(976) 00:13:38.626 fused_ordering(977) 00:13:38.626 fused_ordering(978) 00:13:38.626 fused_ordering(979) 00:13:38.626 fused_ordering(980) 00:13:38.626 fused_ordering(981) 00:13:38.626 fused_ordering(982) 00:13:38.626 fused_ordering(983) 00:13:38.626 fused_ordering(984) 00:13:38.626 fused_ordering(985) 00:13:38.626 fused_ordering(986) 00:13:38.626 fused_ordering(987) 00:13:38.626 fused_ordering(988) 00:13:38.626 fused_ordering(989) 00:13:38.626 fused_ordering(990) 00:13:38.626 fused_ordering(991) 00:13:38.626 fused_ordering(992) 00:13:38.626 fused_ordering(993) 00:13:38.626 fused_ordering(994) 00:13:38.626 fused_ordering(995) 00:13:38.626 fused_ordering(996) 00:13:38.626 fused_ordering(997) 00:13:38.626 fused_ordering(998) 00:13:38.626 fused_ordering(999) 00:13:38.626 fused_ordering(1000) 00:13:38.626 fused_ordering(1001) 00:13:38.626 fused_ordering(1002) 00:13:38.626 fused_ordering(1003) 00:13:38.626 fused_ordering(1004) 00:13:38.626 fused_ordering(1005) 00:13:38.626 fused_ordering(1006) 00:13:38.626 fused_ordering(1007) 00:13:38.626 fused_ordering(1008) 00:13:38.626 fused_ordering(1009) 00:13:38.626 fused_ordering(1010) 00:13:38.626 fused_ordering(1011) 00:13:38.626 fused_ordering(1012) 00:13:38.626 fused_ordering(1013) 00:13:38.626 fused_ordering(1014) 00:13:38.626 fused_ordering(1015) 00:13:38.626 fused_ordering(1016) 00:13:38.626 fused_ordering(1017) 00:13:38.626 fused_ordering(1018) 00:13:38.626 fused_ordering(1019) 00:13:38.626 fused_ordering(1020) 00:13:38.627 fused_ordering(1021) 00:13:38.627 fused_ordering(1022) 00:13:38.627 fused_ordering(1023) 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:38.627 rmmod nvme_tcp 00:13:38.627 rmmod nvme_fabrics 00:13:38.627 rmmod nvme_keyring 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3526015 ']' 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3526015 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3526015 ']' 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3526015 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:13:38.627 13:21:36 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.627 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3526015 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3526015' 00:13:38.886 killing process with pid 3526015 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3526015 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3526015 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.886 13:21:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.426 13:21:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:41.426 00:13:41.426 real 0m7.741s 00:13:41.426 user 0m5.294s 00:13:41.426 sys 0m3.428s 00:13:41.426 13:21:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.426 13:21:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.426 ************************************ 00:13:41.426 END TEST nvmf_fused_ordering 00:13:41.426 ************************************ 00:13:41.426 13:21:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.426 13:21:38 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:41.426 13:21:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:41.426 13:21:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.426 13:21:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.426 ************************************ 00:13:41.426 START TEST nvmf_delete_subsystem 00:13:41.426 ************************************ 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:41.426 * Looking for test storage... 
00:13:41.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.426 13:21:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:43.335 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:43.335 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.335 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:43.336 Found net devices under 0000:09:00.0: cvl_0_0 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:43.336 Found net devices under 0000:09:00.1: cvl_0_1 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:13:43.336 00:13:43.336 --- 10.0.0.2 ping statistics --- 00:13:43.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.336 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:13:43.336 00:13:43.336 --- 10.0.0.1 ping statistics --- 00:13:43.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.336 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3528354 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3528354 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3528354 ']' 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
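The nvmf_tcp_init step traced above carves the two detected e810 ports into the test topology: the target-side port (cvl_0_0) is moved into its own network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, and both directions are verified with ping before nvmf_tgt is started inside the namespace. A condensed sketch of that setup, using the interface, namespace, and address names visible in the trace (anything not shown in the trace, such as paths, is illustrative only):

    # target port lives in its own netns; initiator port stays in the root ns
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> initiator
    # the target is then launched inside the namespace (arguments as in the trace):
    #   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3

Keeping the target's port in a separate namespace forces the NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 onto the physical link between the two ports instead of being short-circuited through the local stack.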
00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.336 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.336 [2024-07-12 13:21:40.731114] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:13:43.336 [2024-07-12 13:21:40.731206] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.336 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.336 [2024-07-12 13:21:40.767957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:43.336 [2024-07-12 13:21:40.795150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.595 [2024-07-12 13:21:40.881651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.595 [2024-07-12 13:21:40.881696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.595 [2024-07-12 13:21:40.881719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.595 [2024-07-12 13:21:40.881730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.595 [2024-07-12 13:21:40.881739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.595 [2024-07-12 13:21:40.881806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.595 [2024-07-12 13:21:40.881811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.595 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.595 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:13:43.595 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.595 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.595 13:21:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.595 [2024-07-12 13:21:41.023096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.595 [2024-07-12 13:21:41.039275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.595 NULL1 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.595 Delay0 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.595 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.853 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.853 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3528380 00:13:43.853 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:43.853 13:21:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:43.853 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.853 [2024-07-12 13:21:41.114058] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
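With the target listening, delete_subsystem.sh provisions a deliberately slow namespace and then puts it under load: a null bdev is wrapped in a delay bdev with 1,000,000 us of added latency, exported through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and spdk_nvme_perf is started with queue depth 128 for a five-second run so that plenty of I/O is still outstanding when the subsystem is deleted two seconds in. The rpc_cmd calls in the trace map onto the RPC client roughly as follows; this is a sketch using direct scripts/rpc.py invocations with the arguments taken from the trace, not the harness's own wrapper or socket handling:

    RPC="./scripts/rpc.py"                                     # assumed direct RPC client
    $RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport, options as in the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                       # null backend, size/block size as in the trace
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # ~1 s added read/write latency (microseconds)
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &              # 5 s run, queue depth 128
    sleep 2
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # delete while I/O is in flight

The burst of "completed with error (sct=0, sc=8)" completions that follows is the behavior the test is after: the queued commands are aborted cleanly as the subsystem's queues are torn down rather than leaving the perf client hung.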
00:13:45.748 13:21:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.748 13:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.748 13:21:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 [2024-07-12 13:21:43.285913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe54c000c00 is same with the state(5) to be set 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed 
with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 
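A note on the stream of completion errors above and below: delete_subsystem.sh tears down nqn.2016-06.io.spdk:cnode1 while the background spdk_nvme_perf run still has queue depth 128 outstanding, so every in-flight command comes back as "completed with error (sct=0, sc=8)" (generic status 0x08, command aborted due to SQ deletion, if the standard NVMe status decoding applies) and each attempt to submit a new batch fails with -6, i.e. -ENXIO, once the qpair is gone. A minimal sketch of the race being exercised, reusing only the paths and parameters visible in this run (the authoritative logic lives in test/nvmf/target/delete_subsystem.sh):

    # start a timed perf run against the subsystem in the background
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # delete the subsystem while that I/O is still outstanding; the queued
    # commands are aborted and perf prints the Read/Write error lines seen here
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The abort storm continues in the trace until perf drains its queues and prints its latency summary further down.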
00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 starting I/O failed: -6 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Write completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.006 Read completed with error (sct=0, sc=8) 00:13:46.007 starting I/O failed: -6 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 starting I/O failed: -6 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 starting I/O failed: -6 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 [2024-07-12 13:21:43.286758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8300 is same with the state(5) to be set 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error 
(sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.007 Write completed with error (sct=0, sc=8) 00:13:46.007 Read completed with error (sct=0, sc=8) 00:13:46.939 [2024-07-12 13:21:44.252804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dfb40 is same with the state(5) to be set 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 [2024-07-12 13:21:44.287757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe54c00cfe0 is same with the state(5) to be set 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, 
sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 [2024-07-12 13:21:44.288028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c1d40 is same with the state(5) to be set 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 [2024-07-12 13:21:44.288487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe54c00d600 is same with the state(5) to be set 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 Write completed with error (sct=0, sc=8) 00:13:46.939 Read completed with error (sct=0, sc=8) 00:13:46.939 [2024-07-12 13:21:44.288850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c2100 is same with the state(5) to be set 00:13:46.939 Initializing NVMe Controllers 00:13:46.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:46.939 Controller IO queue size 128, less than required. 
00:13:46.939 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:46.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:46.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:46.939 Initialization complete. Launching workers. 00:13:46.939 ======================================================== 00:13:46.939 Latency(us) 00:13:46.939 Device Information : IOPS MiB/s Average min max 00:13:46.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.75 0.08 906643.43 394.29 1013511.26 00:13:46.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.23 0.08 901869.65 722.82 1013226.28 00:13:46.939 ======================================================== 00:13:46.939 Total : 331.98 0.16 904238.70 394.29 1013511.26 00:13:46.939 00:13:46.939 [2024-07-12 13:21:44.289686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5dfb40 (9): Bad file descriptor 00:13:46.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:46.939 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.939 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:46.939 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3528380 00:13:46.939 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3528380 00:13:47.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3528380) - No such process 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3528380 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3528380 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3528380 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 
00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.504 [2024-07-12 13:21:44.812765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3528787 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3528787 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:47.504 13:21:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:47.504 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.504 [2024-07-12 13:21:44.878070] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
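The repeating "(( delay++ > 20 )) / kill -0 3528787 / sleep 0.5" triplets that follow are the script polling for the background spdk_nvme_perf (pid 3528787, started above with -t 3) to exit on its own; kill -0 only probes whether the process still exists. Roughly, with the loop bound taken from the trace rather than from the script source, it behaves like:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        # give up after ~10 s of polling (assumed failure path; the trace only
        # shows the counter check, not what happens when it trips)
        (( delay++ > 20 )) && break
        sleep 0.5
    done
    wait "$perf_pid"

Once kill -0 reports "No such process", the script falls through to wait on the pid and then tears the target down, which is what the tail of this test shows.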
00:13:48.068 13:21:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:48.068 13:21:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3528787 00:13:48.068 13:21:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:48.631 13:21:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:48.631 13:21:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3528787 00:13:48.631 13:21:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:48.888 13:21:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:48.888 13:21:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3528787 00:13:48.888 13:21:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:49.452 13:21:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:49.452 13:21:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3528787 00:13:49.452 13:21:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:50.017 13:21:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:50.017 13:21:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3528787 00:13:50.017 13:21:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:50.581 13:21:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:50.581 13:21:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3528787 00:13:50.581 13:21:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:50.581 Initializing NVMe Controllers 00:13:50.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:50.581 Controller IO queue size 128, less than required. 00:13:50.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:50.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:50.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:50.582 Initialization complete. Launching workers. 
00:13:50.582 ======================================================== 00:13:50.582 Latency(us) 00:13:50.582 Device Information : IOPS MiB/s Average min max 00:13:50.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003935.10 1000243.56 1011023.02 00:13:50.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005599.96 1000297.46 1041976.99 00:13:50.582 ======================================================== 00:13:50.582 Total : 256.00 0.12 1004767.53 1000243.56 1041976.99 00:13:50.582 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3528787 00:13:51.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3528787) - No such process 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3528787 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.152 rmmod nvme_tcp 00:13:51.152 rmmod nvme_fabrics 00:13:51.152 rmmod nvme_keyring 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3528354 ']' 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3528354 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3528354 ']' 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3528354 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3528354 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3528354' 00:13:51.152 killing process with pid 3528354 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3528354 00:13:51.152 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3528354 00:13:51.461 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.461 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.461 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.461 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.461 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.461 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.461 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.461 13:21:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.370 13:21:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:53.370 00:13:53.370 real 0m12.303s 00:13:53.370 user 0m27.786s 00:13:53.370 sys 0m2.991s 00:13:53.370 13:21:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.370 13:21:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:53.370 ************************************ 00:13:53.370 END TEST nvmf_delete_subsystem 00:13:53.370 ************************************ 00:13:53.370 13:21:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:53.370 13:21:50 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:53.370 13:21:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:53.370 13:21:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.370 13:21:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:53.370 ************************************ 00:13:53.370 START TEST nvmf_ns_masking 00:13:53.370 ************************************ 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:53.370 * Looking for test storage... 
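The ns_masking run that starts here checks that a namespace added with --no-auto-visible is only exposed to hosts that have been explicitly allowed, using the host UUID passed at connect time. Two helpers from ns_masking.sh dominate the trace below; their bodies are reconstructed here from the traced expansions, not from the script source, so treat this as a sketch:

    # connect: attach the initiator with a fixed host identifier (-I) so the
    # target can mask namespaces per host; the UUID is the uuidgen'd HOSTID
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 40224b6b-de32-49d8-8f97-bf986914ab14 -a 10.0.0.2 -s 4420 -i 4

    # ns_is_visible: a namespace counts as visible if it shows up in list-ns
    # and reports a non-zero NGUID
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

Before any of that, the trace first walks through the usual environment setup: sourcing nvmf/common.sh, exporting PATH, discovering the NICs, and bringing up the TCP test network.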
00:13:53.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6cc3b035-a5ff-4bda-b06a-aa993afeca89 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=df36c4d0-3356-4e79-96cd-610f1f9e9f26 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=40224b6b-de32-49d8-8f97-bf986914ab14 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.370 13:21:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:55.904 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:55.904 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.904 
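Having found the two E810 ports (cvl_0_0 and cvl_0_1), nvmf_tcp_init in nvmf/common.sh isolates one of them in a network namespace so target and initiator traffic actually crosses the link: cvl_0_0 becomes 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as 10.0.0.1. Condensed from the commands traced below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator side

The nvmf_tgt application is then launched under "ip netns exec cvl_0_0_ns_spdk" so its TCP listener lives on 10.0.0.2.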
13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:55.904 Found net devices under 0000:09:00.0: cvl_0_0 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:55.904 Found net devices under 0000:09:00.1: cvl_0_1 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.904 13:21:52 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.904 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.904 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.904 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.904 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.904 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.904 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:13:55.904 00:13:55.904 --- 10.0.0.2 ping statistics --- 00:13:55.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.904 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:13:55.904 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:13:55.904 00:13:55.904 --- 10.0.0.1 ping statistics --- 00:13:55.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.904 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3531245 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3531245 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3531245 ']' 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.905 13:21:53 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.905 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:55.905 [2024-07-12 13:21:53.155671] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:13:55.905 [2024-07-12 13:21:53.155746] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.905 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.905 [2024-07-12 13:21:53.194578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:55.905 [2024-07-12 13:21:53.220454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.905 [2024-07-12 13:21:53.309296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.905 [2024-07-12 13:21:53.309364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.905 [2024-07-12 13:21:53.309380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.905 [2024-07-12 13:21:53.309391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.905 [2024-07-12 13:21:53.309401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
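With networking in place, nvmfappstart launches nvmf_tgt inside the namespace (nvmfpid 3531245 above) and ns_masking.sh builds its fixture over RPC. The sequence traced below boils down to the following calls, copied from the trace (comments are editorial):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MiB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then connects with the fixed host UUID, confirms namespace 1 is visible (non-zero NGUID), adds Malloc2 as namespace 2, disconnects, and re-adds Malloc1 with --no-auto-visible to exercise the masked path.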
00:13:55.905 [2024-07-12 13:21:53.309435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.163 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.163 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:56.163 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.163 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.163 13:21:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:56.163 13:21:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.163 13:21:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.421 [2024-07-12 13:21:53.719954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.421 13:21:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:56.421 13:21:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:56.421 13:21:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:56.680 Malloc1 00:13:56.680 13:21:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:56.938 Malloc2 00:13:56.938 13:21:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:57.196 13:21:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:57.454 13:21:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.711 [2024-07-12 13:21:55.103633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.711 13:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:57.711 13:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 40224b6b-de32-49d8-8f97-bf986914ab14 -a 10.0.0.2 -s 4420 -i 4 00:13:57.969 13:21:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:57.969 13:21:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:57.969 13:21:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.969 13:21:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:57.969 13:21:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:59.878 13:21:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:59.878 13:21:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:59.878 13:21:57 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.878 13:21:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:59.878 13:21:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.878 13:21:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:59.878 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:59.878 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.138 [ 0]:0x1 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9634610781df419891fad7a65d8d81c0 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9634610781df419891fad7a65d8d81c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.138 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.395 [ 0]:0x1 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9634610781df419891fad7a65d8d81c0 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9634610781df419891fad7a65d8d81c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:00.395 [ 1]:0x2 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c0efa3a52a54ed781bd0fb6bfc80adf 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c0efa3a52a54ed781bd0fb6bfc80adf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.395 13:21:57 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:00.395 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.653 13:21:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.911 13:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:01.168 13:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:01.168 13:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 40224b6b-de32-49d8-8f97-bf986914ab14 -a 10.0.0.2 -s 4420 -i 4 00:14:01.168 13:21:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:01.168 13:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:01.168 13:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.168 13:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:01.168 13:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:01.168 13:21:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:03.698 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.699 [ 0]:0x2 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c0efa3a52a54ed781bd0fb6bfc80adf 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c0efa3a52a54ed781bd0fb6bfc80adf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.699 [ 0]:0x1 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.699 13:22:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9634610781df419891fad7a65d8d81c0 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9634610781df419891fad7a65d8d81c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.699 [ 1]:0x2 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.699 13:22:01 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c0efa3a52a54ed781bd0fb6bfc80adf 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c0efa3a52a54ed781bd0fb6bfc80adf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.699 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.957 [ 0]:0x2 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c0efa3a52a54ed781bd0fb6bfc80adf 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c0efa3a52a54ed781bd0fb6bfc80adf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:03.957 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:04.213 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:04.471 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:04.471 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 40224b6b-de32-49d8-8f97-bf986914ab14 -a 10.0.0.2 -s 4420 -i 4 00:14:04.471 13:22:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:04.471 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:04.471 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.471 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:04.471 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:04.471 13:22:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.996 [ 0]:0x1 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.996 13:22:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.996 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9634610781df419891fad7a65d8d81c0 00:14:06.996 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9634610781df419891fad7a65d8d81c0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.996 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.997 [ 1]:0x2 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.997 
13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c0efa3a52a54ed781bd0fb6bfc80adf 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c0efa3a52a54ed781bd0fb6bfc80adf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.997 [ 0]:0x2 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c0efa3a52a54ed781bd0fb6bfc80adf 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c0efa3a52a54ed781bd0fb6bfc80adf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:06.997 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:07.254 [2024-07-12 13:22:04.643928] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:07.254 request: 00:14:07.254 { 00:14:07.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.254 "nsid": 2, 00:14:07.254 "host": "nqn.2016-06.io.spdk:host1", 00:14:07.254 "method": "nvmf_ns_remove_host", 00:14:07.254 "req_id": 1 00:14:07.254 } 00:14:07.254 Got JSON-RPC error response 00:14:07.254 response: 00:14:07.254 { 00:14:07.254 "code": -32602, 00:14:07.254 "message": "Invalid parameters" 00:14:07.254 } 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # 
ns_is_visible 0x1 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.254 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.512 [ 0]:0x2 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c0efa3a52a54ed781bd0fb6bfc80adf 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c0efa3a52a54ed781bd0fb6bfc80adf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3532752 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3532752 /var/tmp/host.sock 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3532752 ']' 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:07.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
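The masking flow traced above reduces to a handful of target-side RPCs plus nvme-cli checks on the initiator. The condensed sequence below is a minimal sketch, not the test script itself: paths, NQNs, addresses and sizes are copied from the trace, and the $rpc shorthand is ours.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    # --no-auto-visible keeps the namespace hidden until a host is explicitly allowed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # grant, then revoke, visibility of namespace 1 for host1
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # initiator side: the test's ns_is_visible check treats an all-zero NGUID
    # returned by id-ns as "namespace inactive/hidden"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid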
00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.512 13:22:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:07.770 [2024-07-12 13:22:04.997797] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:14:07.770 [2024-07-12 13:22:04.997892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532752 ] 00:14:07.770 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.770 [2024-07-12 13:22:05.029882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:07.770 [2024-07-12 13:22:05.057206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.770 [2024-07-12 13:22:05.143551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.028 13:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.028 13:22:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:08.028 13:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.286 13:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.543 13:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6cc3b035-a5ff-4bda-b06a-aa993afeca89 00:14:08.544 13:22:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:08.544 13:22:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6CC3B035A5FF4BDAB06AAA993AFECA89 -i 00:14:08.801 13:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid df36c4d0-3356-4e79-96cd-610f1f9e9f26 00:14:08.801 13:22:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:08.801 13:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DF36C4D033564E7996CD610F1F9E9F26 -i 00:14:09.059 13:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:09.339 13:22:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:09.607 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:09.607 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:10.172 nvme0n1 
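In the steps just traced, the namespaces are re-created with explicit NGUIDs so the host-side bdevs can later be matched back to them: the test's uuid2nguid helper feeds a UUID through `tr -d -`, and the value handed to nvmf_subsystem_add_ns is the same hex string in upper case. A stand-alone equivalent is sketched below; the UUID is the one from the trace, and the upper-casing step is our assumption about what uuid2nguid does beyond the `tr -d -` visible in the log.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    uuid=6cc3b035-a5ff-4bda-b06a-aa993afeca89                    # value copied from the trace
    nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')    # 6CC3B035A5FF4BDAB06AAA993AFECA89
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid"
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1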
00:14:10.172 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:10.172 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:10.429 nvme1n2 00:14:10.429 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:10.429 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:10.429 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:10.429 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:10.429 13:22:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:10.687 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:10.687 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:10.687 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:10.687 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:10.944 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6cc3b035-a5ff-4bda-b06a-aa993afeca89 == \6\c\c\3\b\0\3\5\-\a\5\f\f\-\4\b\d\a\-\b\0\6\a\-\a\a\9\9\3\a\f\e\c\a\8\9 ]] 00:14:10.944 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:10.944 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:10.944 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ df36c4d0-3356-4e79-96cd-610f1f9e9f26 == \d\f\3\6\c\4\d\0\-\3\3\5\6\-\4\e\7\9\-\9\6\c\d\-\6\1\0\f\1\f\9\e\9\f\2\6 ]] 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3532752 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3532752 ']' 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3532752 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3532752 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3532752' 00:14:11.203 killing process with pid 3532752 00:14:11.203 13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3532752 00:14:11.203 
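The verification above runs through a second SPDK application rather than the kernel initiator: spdk_tgt is started with -r /var/tmp/host.sock -m 2, and the hostrpc helper simply points rpc.py at that socket. Condensed from the trace (the hostrpc shell function below is our shorthand), each attached controller should expose exactly the namespace its hostnqn was granted, with the expected UUID.

    hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0    # -> nvme0n1
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1    # -> nvme1n2
    hostrpc bdev_get_bdevs | jq -r '.[].name'              # expect: nvme0n1 nvme1n2
    hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect the UUID behind namespace 1
    hostrpc bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'   # expect the UUID behind namespace 2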
13:22:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3532752 00:14:11.768 13:22:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.026 rmmod nvme_tcp 00:14:12.026 rmmod nvme_fabrics 00:14:12.026 rmmod nvme_keyring 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3531245 ']' 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3531245 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3531245 ']' 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3531245 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3531245 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3531245' 00:14:12.026 killing process with pid 3531245 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3531245 00:14:12.026 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3531245 00:14:12.286 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.286 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.286 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.286 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.286 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.286 13:22:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.286 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.286 13:22:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.191 13:22:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:14:14.191 00:14:14.191 real 0m20.905s 00:14:14.191 user 0m27.065s 00:14:14.191 sys 0m4.178s 00:14:14.191 13:22:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:14.191 13:22:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:14.191 ************************************ 00:14:14.191 END TEST nvmf_ns_masking 00:14:14.191 ************************************ 00:14:14.448 13:22:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:14.448 13:22:11 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:14.448 13:22:11 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:14.448 13:22:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:14.448 13:22:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.448 13:22:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:14.448 ************************************ 00:14:14.448 START TEST nvmf_nvme_cli 00:14:14.448 ************************************ 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:14.448 * Looking for test storage... 00:14:14.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:14.448 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.449 13:22:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.449 13:22:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.449 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:14.449 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:14.449 13:22:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.449 13:22:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:16.977 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:16.978 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:16.978 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.978 13:22:13 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:16.978 Found net devices under 0000:09:00.0: cvl_0_0 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:16.978 Found net devices under 0000:09:00.1: cvl_0_1 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.978 13:22:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:14:16.978 00:14:16.978 --- 10.0.0.2 ping statistics --- 00:14:16.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.978 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:14:16.978 00:14:16.978 --- 10.0.0.1 ping statistics --- 00:14:16.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.978 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.978 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3535242 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3535242 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3535242 ']' 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
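Before the target comes up, nvmftestinit has split the two ports of the NIC across network namespaces so the initiator and the target talk over a real link: cvl_0_0 (the target side) moves into cvl_0_0_ns_spdk with 10.0.0.2, cvl_0_1 (the initiator side) stays in the root namespace with 10.0.0.1, and port 4420 is opened before the reachability pings traced above. A rough condensation of that plumbing, with interface names and addresses taken from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator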
00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:16.979 [2024-07-12 13:22:14.143114] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:14:16.979 [2024-07-12 13:22:14.143180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.979 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.979 [2024-07-12 13:22:14.179387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:16.979 [2024-07-12 13:22:14.204239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.979 [2024-07-12 13:22:14.286622] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.979 [2024-07-12 13:22:14.286689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.979 [2024-07-12 13:22:14.286710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.979 [2024-07-12 13:22:14.286721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.979 [2024-07-12 13:22:14.286731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.979 [2024-07-12 13:22:14.286827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.979 [2024-07-12 13:22:14.286890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.979 [2024-07-12 13:22:14.286956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.979 [2024-07-12 13:22:14.286958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:16.979 [2024-07-12 13:22:14.439130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.979 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 Malloc0 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 Malloc1 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 [2024-07-12 13:22:14.524566] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:14:17.237 00:14:17.237 Discovery Log Number of Records 2, Generation counter 2 00:14:17.237 =====Discovery Log Entry 0====== 00:14:17.237 trtype: tcp 00:14:17.237 adrfam: ipv4 00:14:17.237 subtype: current discovery subsystem 00:14:17.237 treq: not required 00:14:17.237 portid: 0 00:14:17.237 trsvcid: 4420 00:14:17.237 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:17.237 traddr: 10.0.0.2 00:14:17.237 eflags: explicit discovery connections, duplicate discovery information 00:14:17.237 sectype: none 
00:14:17.237 =====Discovery Log Entry 1====== 00:14:17.237 trtype: tcp 00:14:17.237 adrfam: ipv4 00:14:17.237 subtype: nvme subsystem 00:14:17.237 treq: not required 00:14:17.237 portid: 0 00:14:17.237 trsvcid: 4420 00:14:17.237 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:17.237 traddr: 10.0.0.2 00:14:17.237 eflags: none 00:14:17.237 sectype: none 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:17.237 13:22:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.801 13:22:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:17.801 13:22:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:17.801 13:22:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.801 13:22:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:17.801 13:22:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:17.801 13:22:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:20.322 /dev/nvme0n1 ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:20.322 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:20.580 13:22:17 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.580 rmmod nvme_tcp 00:14:20.580 rmmod nvme_fabrics 00:14:20.580 rmmod nvme_keyring 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3535242 ']' 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3535242 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3535242 ']' 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3535242 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3535242 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3535242' 00:14:20.580 killing process with pid 3535242 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3535242 00:14:20.580 13:22:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3535242 00:14:20.838 13:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.838 13:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.838 13:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.838 13:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.838 13:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.838 13:22:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.838 13:22:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.838 13:22:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.374 13:22:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.374 00:14:23.374 real 0m8.520s 00:14:23.374 user 0m15.988s 00:14:23.374 sys 0m2.345s 00:14:23.374 13:22:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.374 13:22:20 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.374 ************************************ 00:14:23.374 END TEST nvmf_nvme_cli 00:14:23.374 ************************************ 00:14:23.374 13:22:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:23.374 13:22:20 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:23.374 13:22:20 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:23.374 13:22:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:23.374 13:22:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.374 13:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.374 ************************************ 00:14:23.374 START TEST nvmf_vfio_user 00:14:23.374 ************************************ 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:23.374 * Looking for test storage... 00:14:23.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3536109 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3536109' 00:14:23.374 Process pid: 3536109 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3536109 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3536109 ']' 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:23.374 [2024-07-12 13:22:20.407164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:14:23.374 [2024-07-12 13:22:20.407260] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.374 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.374 [2024-07-12 13:22:20.441169] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:23.374 [2024-07-12 13:22:20.467984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.374 [2024-07-12 13:22:20.552345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.374 [2024-07-12 13:22:20.552411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:23.374 [2024-07-12 13:22:20.552434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.374 [2024-07-12 13:22:20.552445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.374 [2024-07-12 13:22:20.552455] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.374 [2024-07-12 13:22:20.552521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.374 [2024-07-12 13:22:20.552543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.374 [2024-07-12 13:22:20.552611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.374 [2024-07-12 13:22:20.552613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:23.374 13:22:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:24.306 13:22:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:24.564 13:22:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:24.564 13:22:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:24.564 13:22:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:24.564 13:22:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:24.564 13:22:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:24.822 Malloc1 00:14:24.822 13:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:25.079 13:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:25.336 13:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:25.594 13:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:25.594 13:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:25.594 13:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:25.850 Malloc2 00:14:25.850 13:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:26.106 13:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:26.371 13:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:26.672 13:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:26.672 13:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:26.672 13:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:26.672 13:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:26.672 13:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:26.672 13:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:26.672 [2024-07-12 13:22:24.044579] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:14:26.672 [2024-07-12 13:22:24.044634] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3536580 ] 00:14:26.672 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.672 [2024-07-12 13:22:24.061087] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:26.672 [2024-07-12 13:22:24.078957] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:26.672 [2024-07-12 13:22:24.091720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:26.672 [2024-07-12 13:22:24.091754] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd810f21000 00:14:26.672 [2024-07-12 13:22:24.092715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.672 [2024-07-12 13:22:24.093707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.672 [2024-07-12 13:22:24.094713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.672 [2024-07-12 13:22:24.095718] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:26.672 [2024-07-12 13:22:24.096725] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:26.672 [2024-07-12 13:22:24.097736] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:26.672 [2024-07-12 13:22:24.098737] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:26.672 [2024-07-12 13:22:24.099740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap 
offset 0 00:14:26.672 [2024-07-12 13:22:24.100750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:26.672 [2024-07-12 13:22:24.100770] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd80fce3000 00:14:26.672 [2024-07-12 13:22:24.101923] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:26.672 [2024-07-12 13:22:24.117751] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:26.672 [2024-07-12 13:22:24.117788] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:26.672 [2024-07-12 13:22:24.119864] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:26.672 [2024-07-12 13:22:24.119917] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:26.672 [2024-07-12 13:22:24.120009] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:26.672 [2024-07-12 13:22:24.120038] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:26.672 [2024-07-12 13:22:24.120049] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:26.672 [2024-07-12 13:22:24.120852] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:26.672 [2024-07-12 13:22:24.120872] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:26.672 [2024-07-12 13:22:24.120884] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:26.672 [2024-07-12 13:22:24.121856] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:26.672 [2024-07-12 13:22:24.121874] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:26.672 [2024-07-12 13:22:24.121887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:26.672 [2024-07-12 13:22:24.122860] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:26.672 [2024-07-12 13:22:24.122880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:26.672 [2024-07-12 13:22:24.123866] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:26.673 [2024-07-12 13:22:24.123884] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:26.673 [2024-07-12 
13:22:24.123893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:26.673 [2024-07-12 13:22:24.123909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:26.673 [2024-07-12 13:22:24.124020] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:26.673 [2024-07-12 13:22:24.124029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:26.673 [2024-07-12 13:22:24.124037] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:26.673 [2024-07-12 13:22:24.124874] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:26.931 [2024-07-12 13:22:24.125881] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:26.931 [2024-07-12 13:22:24.126884] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:26.931 [2024-07-12 13:22:24.127883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.931 [2024-07-12 13:22:24.128333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:26.931 [2024-07-12 13:22:24.128898] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:26.931 [2024-07-12 13:22:24.128915] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:26.931 [2024-07-12 13:22:24.128924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.128948] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:26.931 [2024-07-12 13:22:24.128965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.128989] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:26.931 [2024-07-12 13:22:24.128998] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:26.931 [2024-07-12 13:22:24.129016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:26.931 [2024-07-12 13:22:24.129092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:26.931 [2024-07-12 13:22:24.129107] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:26.931 [2024-07-12 
13:22:24.129119] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:26.931 [2024-07-12 13:22:24.129127] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:26.931 [2024-07-12 13:22:24.129135] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:26.931 [2024-07-12 13:22:24.129142] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:26.931 [2024-07-12 13:22:24.129150] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:26.931 [2024-07-12 13:22:24.129158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.129170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.129188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:26.931 [2024-07-12 13:22:24.129204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:26.931 [2024-07-12 13:22:24.129224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.931 [2024-07-12 13:22:24.129238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.931 [2024-07-12 13:22:24.129249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.931 [2024-07-12 13:22:24.129261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.931 [2024-07-12 13:22:24.129269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.129284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.129322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:26.931 [2024-07-12 13:22:24.129337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:26.931 [2024-07-12 13:22:24.129347] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:26.931 [2024-07-12 13:22:24.129356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.129367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:26.931 
[2024-07-12 13:22:24.129385] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.129399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:26.931 [2024-07-12 13:22:24.129413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:26.931 [2024-07-12 13:22:24.129475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.129490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:26.931 [2024-07-12 13:22:24.129503] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:26.931 [2024-07-12 13:22:24.129511] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:26.931 [2024-07-12 13:22:24.129521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:26.931 [2024-07-12 13:22:24.129537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:26.931 [2024-07-12 13:22:24.129552] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:26.931 [2024-07-12 13:22:24.129568] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129601] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:26.932 [2024-07-12 13:22:24.129609] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:26.932 [2024-07-12 13:22:24.129636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.129666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.129695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129721] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:26.932 [2024-07-12 13:22:24.129729] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:26.932 [2024-07-12 13:22:24.129739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.129752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.129765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129824] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:26.932 [2024-07-12 13:22:24.129832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:26.932 [2024-07-12 13:22:24.129840] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:26.932 [2024-07-12 13:22:24.129864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.129882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.129900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.129911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.129927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.129938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.129958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.129970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.129991] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:26.932 [2024-07-12 13:22:24.130000] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:26.932 [2024-07-12 13:22:24.130007] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:26.932 [2024-07-12 13:22:24.130013] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:26.932 [2024-07-12 13:22:24.130022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:26.932 [2024-07-12 13:22:24.130033] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:26.932 [2024-07-12 13:22:24.130041] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:26.932 [2024-07-12 13:22:24.130050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.130060] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:26.932 [2024-07-12 13:22:24.130068] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:26.932 [2024-07-12 13:22:24.130076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.130088] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:26.932 [2024-07-12 13:22:24.130096] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:26.932 [2024-07-12 13:22:24.130104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:26.932 [2024-07-12 13:22:24.130116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.130135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.130153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:26.932 [2024-07-12 13:22:24.130164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:26.932 ===================================================== 00:14:26.932 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:26.932 ===================================================== 00:14:26.932 Controller Capabilities/Features 00:14:26.932 ================================ 00:14:26.932 Vendor ID: 4e58 00:14:26.932 Subsystem Vendor ID: 4e58 00:14:26.932 Serial Number: SPDK1 00:14:26.932 Model Number: SPDK bdev Controller 00:14:26.932 Firmware Version: 24.09 00:14:26.932 Recommended Arb Burst: 6 00:14:26.932 IEEE OUI Identifier: 8d 6b 50 00:14:26.932 Multi-path I/O 00:14:26.932 May have multiple subsystem ports: Yes 00:14:26.932 May have multiple controllers: Yes 00:14:26.932 Associated with SR-IOV VF: No 00:14:26.932 Max Data Transfer Size: 131072 00:14:26.932 Max Number of Namespaces: 32 00:14:26.932 Max Number of I/O Queues: 127 00:14:26.932 NVMe Specification Version (VS): 1.3 00:14:26.932 NVMe Specification Version (Identify): 1.3 00:14:26.932 Maximum Queue Entries: 256 
00:14:26.932 Contiguous Queues Required: Yes 00:14:26.932 Arbitration Mechanisms Supported 00:14:26.932 Weighted Round Robin: Not Supported 00:14:26.932 Vendor Specific: Not Supported 00:14:26.932 Reset Timeout: 15000 ms 00:14:26.932 Doorbell Stride: 4 bytes 00:14:26.932 NVM Subsystem Reset: Not Supported 00:14:26.932 Command Sets Supported 00:14:26.932 NVM Command Set: Supported 00:14:26.932 Boot Partition: Not Supported 00:14:26.932 Memory Page Size Minimum: 4096 bytes 00:14:26.932 Memory Page Size Maximum: 4096 bytes 00:14:26.932 Persistent Memory Region: Not Supported 00:14:26.932 Optional Asynchronous Events Supported 00:14:26.932 Namespace Attribute Notices: Supported 00:14:26.932 Firmware Activation Notices: Not Supported 00:14:26.932 ANA Change Notices: Not Supported 00:14:26.932 PLE Aggregate Log Change Notices: Not Supported 00:14:26.932 LBA Status Info Alert Notices: Not Supported 00:14:26.932 EGE Aggregate Log Change Notices: Not Supported 00:14:26.932 Normal NVM Subsystem Shutdown event: Not Supported 00:14:26.932 Zone Descriptor Change Notices: Not Supported 00:14:26.932 Discovery Log Change Notices: Not Supported 00:14:26.932 Controller Attributes 00:14:26.932 128-bit Host Identifier: Supported 00:14:26.932 Non-Operational Permissive Mode: Not Supported 00:14:26.932 NVM Sets: Not Supported 00:14:26.932 Read Recovery Levels: Not Supported 00:14:26.932 Endurance Groups: Not Supported 00:14:26.932 Predictable Latency Mode: Not Supported 00:14:26.932 Traffic Based Keep ALive: Not Supported 00:14:26.932 Namespace Granularity: Not Supported 00:14:26.932 SQ Associations: Not Supported 00:14:26.932 UUID List: Not Supported 00:14:26.932 Multi-Domain Subsystem: Not Supported 00:14:26.932 Fixed Capacity Management: Not Supported 00:14:26.932 Variable Capacity Management: Not Supported 00:14:26.932 Delete Endurance Group: Not Supported 00:14:26.932 Delete NVM Set: Not Supported 00:14:26.932 Extended LBA Formats Supported: Not Supported 00:14:26.932 Flexible Data Placement Supported: Not Supported 00:14:26.932 00:14:26.932 Controller Memory Buffer Support 00:14:26.932 ================================ 00:14:26.932 Supported: No 00:14:26.932 00:14:26.932 Persistent Memory Region Support 00:14:26.932 ================================ 00:14:26.932 Supported: No 00:14:26.932 00:14:26.932 Admin Command Set Attributes 00:14:26.932 ============================ 00:14:26.932 Security Send/Receive: Not Supported 00:14:26.932 Format NVM: Not Supported 00:14:26.932 Firmware Activate/Download: Not Supported 00:14:26.932 Namespace Management: Not Supported 00:14:26.932 Device Self-Test: Not Supported 00:14:26.932 Directives: Not Supported 00:14:26.932 NVMe-MI: Not Supported 00:14:26.932 Virtualization Management: Not Supported 00:14:26.932 Doorbell Buffer Config: Not Supported 00:14:26.932 Get LBA Status Capability: Not Supported 00:14:26.932 Command & Feature Lockdown Capability: Not Supported 00:14:26.932 Abort Command Limit: 4 00:14:26.932 Async Event Request Limit: 4 00:14:26.932 Number of Firmware Slots: N/A 00:14:26.932 Firmware Slot 1 Read-Only: N/A 00:14:26.932 Firmware Activation Without Reset: N/A 00:14:26.932 Multiple Update Detection Support: N/A 00:14:26.932 Firmware Update Granularity: No Information Provided 00:14:26.932 Per-Namespace SMART Log: No 00:14:26.932 Asymmetric Namespace Access Log Page: Not Supported 00:14:26.932 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:26.932 Command Effects Log Page: Supported 00:14:26.932 Get Log Page Extended Data: Supported 00:14:26.932 Telemetry 
Log Pages: Not Supported 00:14:26.933 Persistent Event Log Pages: Not Supported 00:14:26.933 Supported Log Pages Log Page: May Support 00:14:26.933 Commands Supported & Effects Log Page: Not Supported 00:14:26.933 Feature Identifiers & Effects Log Page:May Support 00:14:26.933 NVMe-MI Commands & Effects Log Page: May Support 00:14:26.933 Data Area 4 for Telemetry Log: Not Supported 00:14:26.933 Error Log Page Entries Supported: 128 00:14:26.933 Keep Alive: Supported 00:14:26.933 Keep Alive Granularity: 10000 ms 00:14:26.933 00:14:26.933 NVM Command Set Attributes 00:14:26.933 ========================== 00:14:26.933 Submission Queue Entry Size 00:14:26.933 Max: 64 00:14:26.933 Min: 64 00:14:26.933 Completion Queue Entry Size 00:14:26.933 Max: 16 00:14:26.933 Min: 16 00:14:26.933 Number of Namespaces: 32 00:14:26.933 Compare Command: Supported 00:14:26.933 Write Uncorrectable Command: Not Supported 00:14:26.933 Dataset Management Command: Supported 00:14:26.933 Write Zeroes Command: Supported 00:14:26.933 Set Features Save Field: Not Supported 00:14:26.933 Reservations: Not Supported 00:14:26.933 Timestamp: Not Supported 00:14:26.933 Copy: Supported 00:14:26.933 Volatile Write Cache: Present 00:14:26.933 Atomic Write Unit (Normal): 1 00:14:26.933 Atomic Write Unit (PFail): 1 00:14:26.933 Atomic Compare & Write Unit: 1 00:14:26.933 Fused Compare & Write: Supported 00:14:26.933 Scatter-Gather List 00:14:26.933 SGL Command Set: Supported (Dword aligned) 00:14:26.933 SGL Keyed: Not Supported 00:14:26.933 SGL Bit Bucket Descriptor: Not Supported 00:14:26.933 SGL Metadata Pointer: Not Supported 00:14:26.933 Oversized SGL: Not Supported 00:14:26.933 SGL Metadata Address: Not Supported 00:14:26.933 SGL Offset: Not Supported 00:14:26.933 Transport SGL Data Block: Not Supported 00:14:26.933 Replay Protected Memory Block: Not Supported 00:14:26.933 00:14:26.933 Firmware Slot Information 00:14:26.933 ========================= 00:14:26.933 Active slot: 1 00:14:26.933 Slot 1 Firmware Revision: 24.09 00:14:26.933 00:14:26.933 00:14:26.933 Commands Supported and Effects 00:14:26.933 ============================== 00:14:26.933 Admin Commands 00:14:26.933 -------------- 00:14:26.933 Get Log Page (02h): Supported 00:14:26.933 Identify (06h): Supported 00:14:26.933 Abort (08h): Supported 00:14:26.933 Set Features (09h): Supported 00:14:26.933 Get Features (0Ah): Supported 00:14:26.933 Asynchronous Event Request (0Ch): Supported 00:14:26.933 Keep Alive (18h): Supported 00:14:26.933 I/O Commands 00:14:26.933 ------------ 00:14:26.933 Flush (00h): Supported LBA-Change 00:14:26.933 Write (01h): Supported LBA-Change 00:14:26.933 Read (02h): Supported 00:14:26.933 Compare (05h): Supported 00:14:26.933 Write Zeroes (08h): Supported LBA-Change 00:14:26.933 Dataset Management (09h): Supported LBA-Change 00:14:26.933 Copy (19h): Supported LBA-Change 00:14:26.933 00:14:26.933 Error Log 00:14:26.933 ========= 00:14:26.933 00:14:26.933 Arbitration 00:14:26.933 =========== 00:14:26.933 Arbitration Burst: 1 00:14:26.933 00:14:26.933 Power Management 00:14:26.933 ================ 00:14:26.933 Number of Power States: 1 00:14:26.933 Current Power State: Power State #0 00:14:26.933 Power State #0: 00:14:26.933 Max Power: 0.00 W 00:14:26.933 Non-Operational State: Operational 00:14:26.933 Entry Latency: Not Reported 00:14:26.933 Exit Latency: Not Reported 00:14:26.933 Relative Read Throughput: 0 00:14:26.933 Relative Read Latency: 0 00:14:26.933 Relative Write Throughput: 0 00:14:26.933 Relative Write Latency: 0 00:14:26.933 Idle 
Power: Not Reported 00:14:26.933 Active Power: Not Reported 00:14:26.933 Non-Operational Permissive Mode: Not Supported 00:14:26.933 00:14:26.933 Health Information 00:14:26.933 ================== 00:14:26.933 Critical Warnings: 00:14:26.933 Available Spare Space: OK 00:14:26.933 Temperature: OK 00:14:26.933 Device Reliability: OK 00:14:26.933 Read Only: No 00:14:26.933 Volatile Memory Backup: OK 00:14:26.933 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:26.933 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:26.933 Available Spare: 0% 00:14:26.933 Available Sp[2024-07-12 13:22:24.130283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:26.933 [2024-07-12 13:22:24.130324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:26.933 [2024-07-12 13:22:24.130382] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:26.933 [2024-07-12 13:22:24.130399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.933 [2024-07-12 13:22:24.130410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.933 [2024-07-12 13:22:24.130420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.933 [2024-07-12 13:22:24.130430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.933 [2024-07-12 13:22:24.133327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:26.933 [2024-07-12 13:22:24.133349] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:26.933 [2024-07-12 13:22:24.133937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.933 [2024-07-12 13:22:24.134014] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:26.933 [2024-07-12 13:22:24.134028] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:26.933 [2024-07-12 13:22:24.134945] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:26.933 [2024-07-12 13:22:24.134968] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:26.933 [2024-07-12 13:22:24.135042] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:26.933 [2024-07-12 13:22:24.138330] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:26.933 are Threshold: 0% 00:14:26.933 Life Percentage Used: 0% 00:14:26.933 Data Units Read: 0 00:14:26.933 Data Units Written: 0 00:14:26.933 Host Read Commands: 0 00:14:26.933 Host Write Commands: 0 00:14:26.933 Controller Busy Time: 0 minutes 00:14:26.933 Power Cycles: 0 00:14:26.933 Power On Hours: 0 hours 
00:14:26.933 Unsafe Shutdowns: 0 00:14:26.933 Unrecoverable Media Errors: 0 00:14:26.933 Lifetime Error Log Entries: 0 00:14:26.933 Warning Temperature Time: 0 minutes 00:14:26.933 Critical Temperature Time: 0 minutes 00:14:26.933 00:14:26.933 Number of Queues 00:14:26.933 ================ 00:14:26.933 Number of I/O Submission Queues: 127 00:14:26.933 Number of I/O Completion Queues: 127 00:14:26.933 00:14:26.933 Active Namespaces 00:14:26.933 ================= 00:14:26.933 Namespace ID:1 00:14:26.933 Error Recovery Timeout: Unlimited 00:14:26.933 Command Set Identifier: NVM (00h) 00:14:26.933 Deallocate: Supported 00:14:26.933 Deallocated/Unwritten Error: Not Supported 00:14:26.933 Deallocated Read Value: Unknown 00:14:26.933 Deallocate in Write Zeroes: Not Supported 00:14:26.933 Deallocated Guard Field: 0xFFFF 00:14:26.933 Flush: Supported 00:14:26.933 Reservation: Supported 00:14:26.933 Namespace Sharing Capabilities: Multiple Controllers 00:14:26.933 Size (in LBAs): 131072 (0GiB) 00:14:26.933 Capacity (in LBAs): 131072 (0GiB) 00:14:26.933 Utilization (in LBAs): 131072 (0GiB) 00:14:26.933 NGUID: AE32CB81A0D34A61BB151848F6224B2E 00:14:26.933 UUID: ae32cb81-a0d3-4a61-bb15-1848f6224b2e 00:14:26.933 Thin Provisioning: Not Supported 00:14:26.933 Per-NS Atomic Units: Yes 00:14:26.933 Atomic Boundary Size (Normal): 0 00:14:26.933 Atomic Boundary Size (PFail): 0 00:14:26.933 Atomic Boundary Offset: 0 00:14:26.933 Maximum Single Source Range Length: 65535 00:14:26.933 Maximum Copy Length: 65535 00:14:26.933 Maximum Source Range Count: 1 00:14:26.933 NGUID/EUI64 Never Reused: No 00:14:26.933 Namespace Write Protected: No 00:14:26.933 Number of LBA Formats: 1 00:14:26.933 Current LBA Format: LBA Format #00 00:14:26.934 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:26.934 00:14:26.934 13:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:26.934 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.934 [2024-07-12 13:22:24.380328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.196 Initializing NVMe Controllers 00:14:32.196 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:32.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:32.196 Initialization complete. Launching workers. 
00:14:32.196 ======================================================== 00:14:32.196 Latency(us) 00:14:32.197 Device Information : IOPS MiB/s Average min max 00:14:32.197 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33318.39 130.15 3841.51 1173.05 7428.08 00:14:32.197 ======================================================== 00:14:32.197 Total : 33318.39 130.15 3841.51 1173.05 7428.08 00:14:32.197 00:14:32.197 [2024-07-12 13:22:29.406568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:32.197 13:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:32.197 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.197 [2024-07-12 13:22:29.648764] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.457 Initializing NVMe Controllers 00:14:37.457 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.457 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:37.457 Initialization complete. Launching workers. 00:14:37.457 ======================================================== 00:14:37.457 Latency(us) 00:14:37.457 Device Information : IOPS MiB/s Average min max 00:14:37.457 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16032.31 62.63 7989.08 4238.12 14606.59 00:14:37.457 ======================================================== 00:14:37.457 Total : 16032.31 62.63 7989.08 4238.12 14606.59 00:14:37.457 00:14:37.457 [2024-07-12 13:22:34.691206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.457 13:22:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:37.457 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.457 [2024-07-12 13:22:34.905282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.719 [2024-07-12 13:22:39.972622] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:42.719 Initializing NVMe Controllers 00:14:42.719 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:42.719 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:42.719 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:42.719 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:42.719 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:42.719 Initialization complete. Launching workers. 
00:14:42.719 Starting thread on core 2 00:14:42.719 Starting thread on core 3 00:14:42.719 Starting thread on core 1 00:14:42.719 13:22:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:42.719 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.976 [2024-07-12 13:22:40.288823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.255 [2024-07-12 13:22:43.359587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.255 Initializing NVMe Controllers 00:14:46.255 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.255 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:46.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:46.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:46.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:46.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:46.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:46.255 Initialization complete. Launching workers. 00:14:46.255 Starting thread on core 1 with urgent priority queue 00:14:46.255 Starting thread on core 2 with urgent priority queue 00:14:46.255 Starting thread on core 3 with urgent priority queue 00:14:46.255 Starting thread on core 0 with urgent priority queue 00:14:46.255 SPDK bdev Controller (SPDK1 ) core 0: 5036.00 IO/s 19.86 secs/100000 ios 00:14:46.255 SPDK bdev Controller (SPDK1 ) core 1: 5252.33 IO/s 19.04 secs/100000 ios 00:14:46.255 SPDK bdev Controller (SPDK1 ) core 2: 5140.33 IO/s 19.45 secs/100000 ios 00:14:46.255 SPDK bdev Controller (SPDK1 ) core 3: 4973.67 IO/s 20.11 secs/100000 ios 00:14:46.255 ======================================================== 00:14:46.255 00:14:46.255 13:22:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:46.255 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.255 [2024-07-12 13:22:43.658840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.255 Initializing NVMe Controllers 00:14:46.255 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.255 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.255 Namespace ID: 1 size: 0GB 00:14:46.255 Initialization complete. 00:14:46.255 INFO: using host memory buffer for IO 00:14:46.255 Hello world! 
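As a quick cross-check on the spdk_nvme_perf and arbitration summaries above: with a fixed queue depth, Little's law says IOPS multiplied by average latency should land near the configured -q value, and the arbitration "secs/100000 ios" column is simply 100000 divided by the per-core IO/s. A minimal sketch in plain Python, using the figures copied from the log above (the script itself is not part of the SPDK test suite, and the run labels are illustrative only):

# Cross-check of the spdk_nvme_perf summaries via Little's law:
# outstanding I/Os ~= IOPS * average latency; both runs used -q 128.
runs = {
    "read  (-q 128 -o 4096)": (33318.39, 3841.51),   # IOPS, avg latency (us), from the log
    "write (-q 128 -o 4096)": (16032.31, 7989.08),
}
for name, (iops, avg_us) in runs.items():
    outstanding = iops * avg_us / 1_000_000          # us -> s
    print(f"{name}: ~{outstanding:.1f} outstanding I/Os (expected ~128)")

# Arbitration summary: 100000 ios / per-core IO/s should match the secs/100000 ios column.
print(f"core 0: {100000 / 5036.00:.2f} secs/100000 ios (log reports 19.86)")

Both checks come out at roughly 128 outstanding I/Os and 19.86 seconds respectively, consistent with the tables printed by the tools above.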
00:14:46.255 [2024-07-12 13:22:43.693451] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.512 13:22:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:46.512 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.771 [2024-07-12 13:22:43.992830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:47.705 Initializing NVMe Controllers 00:14:47.705 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:47.705 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:47.705 Initialization complete. Launching workers. 00:14:47.705 submit (in ns) avg, min, max = 7767.7, 3560.0, 4001701.1 00:14:47.705 complete (in ns) avg, min, max = 24739.4, 2061.1, 4015200.0 00:14:47.705 00:14:47.705 Submit histogram 00:14:47.705 ================ 00:14:47.705 Range in us Cumulative Count 00:14:47.705 3.556 - 3.579: 1.1474% ( 155) 00:14:47.705 3.579 - 3.603: 7.3136% ( 833) 00:14:47.705 3.603 - 3.627: 16.8110% ( 1283) 00:14:47.705 3.627 - 3.650: 26.5453% ( 1315) 00:14:47.705 3.650 - 3.674: 34.9471% ( 1135) 00:14:47.705 3.674 - 3.698: 42.3569% ( 1001) 00:14:47.705 3.698 - 3.721: 49.3893% ( 950) 00:14:47.705 3.721 - 3.745: 55.1336% ( 776) 00:14:47.705 3.745 - 3.769: 58.9385% ( 514) 00:14:47.705 3.769 - 3.793: 62.5435% ( 487) 00:14:47.705 3.793 - 3.816: 65.8006% ( 440) 00:14:47.705 3.816 - 3.840: 69.4352% ( 491) 00:14:47.705 3.840 - 3.864: 73.8915% ( 602) 00:14:47.705 3.864 - 3.887: 77.8814% ( 539) 00:14:47.705 3.887 - 3.911: 81.4642% ( 484) 00:14:47.705 3.911 - 3.935: 84.4400% ( 402) 00:14:47.705 3.935 - 3.959: 86.4091% ( 266) 00:14:47.705 3.959 - 3.982: 88.4373% ( 274) 00:14:47.705 3.982 - 4.006: 89.8956% ( 197) 00:14:47.705 4.006 - 4.030: 91.0356% ( 154) 00:14:47.705 4.030 - 4.053: 92.1386% ( 149) 00:14:47.705 4.053 - 4.077: 93.2045% ( 144) 00:14:47.705 4.077 - 4.101: 94.0262% ( 111) 00:14:47.705 4.101 - 4.124: 94.7813% ( 102) 00:14:47.705 4.124 - 4.148: 95.3512% ( 77) 00:14:47.705 4.148 - 4.172: 95.8250% ( 64) 00:14:47.705 4.172 - 4.196: 96.1359% ( 42) 00:14:47.705 4.196 - 4.219: 96.3876% ( 34) 00:14:47.705 4.219 - 4.243: 96.5579% ( 23) 00:14:47.705 4.243 - 4.267: 96.6837% ( 17) 00:14:47.705 4.267 - 4.290: 96.8391% ( 21) 00:14:47.705 4.290 - 4.314: 96.9428% ( 14) 00:14:47.705 4.314 - 4.338: 97.0538% ( 15) 00:14:47.705 4.338 - 4.361: 97.1352% ( 11) 00:14:47.705 4.361 - 4.385: 97.2093% ( 10) 00:14:47.705 4.385 - 4.409: 97.2463% ( 5) 00:14:47.705 4.409 - 4.433: 97.2759% ( 4) 00:14:47.705 4.433 - 4.456: 97.3203% ( 6) 00:14:47.705 4.456 - 4.480: 97.3425% ( 3) 00:14:47.705 4.480 - 4.504: 97.3943% ( 7) 00:14:47.705 4.551 - 4.575: 97.4091% ( 2) 00:14:47.705 4.575 - 4.599: 97.4165% ( 1) 00:14:47.705 4.599 - 4.622: 97.4313% ( 2) 00:14:47.705 4.622 - 4.646: 97.4461% ( 2) 00:14:47.705 4.646 - 4.670: 97.4758% ( 4) 00:14:47.705 4.670 - 4.693: 97.4906% ( 2) 00:14:47.705 4.693 - 4.717: 97.5498% ( 8) 00:14:47.705 4.717 - 4.741: 97.5794% ( 4) 00:14:47.705 4.741 - 4.764: 97.6164% ( 5) 00:14:47.705 4.764 - 4.788: 97.6608% ( 6) 00:14:47.705 4.788 - 4.812: 97.6904% ( 4) 00:14:47.705 4.812 - 4.836: 97.7496% ( 8) 00:14:47.705 4.836 - 4.859: 97.8089% ( 8) 00:14:47.705 4.859 - 4.883: 97.8533% ( 6) 00:14:47.705 4.883 - 4.907: 97.9051% ( 7) 00:14:47.705 4.907 - 4.930: 97.9347% ( 4) 00:14:47.705 
4.930 - 4.954: 97.9569% ( 3) 00:14:47.705 4.954 - 4.978: 98.0087% ( 7) 00:14:47.705 4.978 - 5.001: 98.0457% ( 5) 00:14:47.705 5.001 - 5.025: 98.0680% ( 3) 00:14:47.705 5.025 - 5.049: 98.0976% ( 4) 00:14:47.705 5.049 - 5.073: 98.1198% ( 3) 00:14:47.705 5.073 - 5.096: 98.1272% ( 1) 00:14:47.705 5.096 - 5.120: 98.1494% ( 3) 00:14:47.705 5.167 - 5.191: 98.1568% ( 1) 00:14:47.705 5.191 - 5.215: 98.1864% ( 4) 00:14:47.705 5.215 - 5.239: 98.2086% ( 3) 00:14:47.705 5.286 - 5.310: 98.2234% ( 2) 00:14:47.705 5.333 - 5.357: 98.2308% ( 1) 00:14:47.705 5.476 - 5.499: 98.2382% ( 1) 00:14:47.705 5.499 - 5.523: 98.2456% ( 1) 00:14:47.705 5.784 - 5.807: 98.2604% ( 2) 00:14:47.705 5.855 - 5.879: 98.2678% ( 1) 00:14:47.705 5.926 - 5.950: 98.2752% ( 1) 00:14:47.705 5.973 - 5.997: 98.2826% ( 1) 00:14:47.705 5.997 - 6.021: 98.2900% ( 1) 00:14:47.705 6.068 - 6.116: 98.3048% ( 2) 00:14:47.705 6.210 - 6.258: 98.3122% ( 1) 00:14:47.705 6.305 - 6.353: 98.3270% ( 2) 00:14:47.705 6.590 - 6.637: 98.3344% ( 1) 00:14:47.705 6.684 - 6.732: 98.3418% ( 1) 00:14:47.705 6.874 - 6.921: 98.3492% ( 1) 00:14:47.705 6.969 - 7.016: 98.3641% ( 2) 00:14:47.705 7.111 - 7.159: 98.3715% ( 1) 00:14:47.706 7.253 - 7.301: 98.3789% ( 1) 00:14:47.706 7.490 - 7.538: 98.3863% ( 1) 00:14:47.706 7.585 - 7.633: 98.3937% ( 1) 00:14:47.706 7.680 - 7.727: 98.4011% ( 1) 00:14:47.706 7.727 - 7.775: 98.4233% ( 3) 00:14:47.706 7.822 - 7.870: 98.4307% ( 1) 00:14:47.706 7.917 - 7.964: 98.4381% ( 1) 00:14:47.706 7.964 - 8.012: 98.4455% ( 1) 00:14:47.706 8.059 - 8.107: 98.4529% ( 1) 00:14:47.706 8.107 - 8.154: 98.4677% ( 2) 00:14:47.706 8.154 - 8.201: 98.4825% ( 2) 00:14:47.706 8.201 - 8.249: 98.4899% ( 1) 00:14:47.706 8.249 - 8.296: 98.4973% ( 1) 00:14:47.706 8.391 - 8.439: 98.5047% ( 1) 00:14:47.706 8.533 - 8.581: 98.5121% ( 1) 00:14:47.706 8.628 - 8.676: 98.5343% ( 3) 00:14:47.706 8.676 - 8.723: 98.5565% ( 3) 00:14:47.706 8.723 - 8.770: 98.5713% ( 2) 00:14:47.706 8.770 - 8.818: 98.5861% ( 2) 00:14:47.706 8.865 - 8.913: 98.5935% ( 1) 00:14:47.706 8.913 - 8.960: 98.6157% ( 3) 00:14:47.706 8.960 - 9.007: 98.6379% ( 3) 00:14:47.706 9.055 - 9.102: 98.6528% ( 2) 00:14:47.706 9.102 - 9.150: 98.6602% ( 1) 00:14:47.706 9.150 - 9.197: 98.6824% ( 3) 00:14:47.706 9.244 - 9.292: 98.6972% ( 2) 00:14:47.706 9.292 - 9.339: 98.7120% ( 2) 00:14:47.706 9.387 - 9.434: 98.7194% ( 1) 00:14:47.706 9.481 - 9.529: 98.7268% ( 1) 00:14:47.706 9.529 - 9.576: 98.7342% ( 1) 00:14:47.706 9.576 - 9.624: 98.7416% ( 1) 00:14:47.706 9.624 - 9.671: 98.7564% ( 2) 00:14:47.706 9.766 - 9.813: 98.7712% ( 2) 00:14:47.706 9.861 - 9.908: 98.7786% ( 1) 00:14:47.706 9.908 - 9.956: 98.7860% ( 1) 00:14:47.706 9.956 - 10.003: 98.7934% ( 1) 00:14:47.706 10.003 - 10.050: 98.8008% ( 1) 00:14:47.706 10.193 - 10.240: 98.8082% ( 1) 00:14:47.706 10.382 - 10.430: 98.8156% ( 1) 00:14:47.706 10.524 - 10.572: 98.8230% ( 1) 00:14:47.706 10.714 - 10.761: 98.8378% ( 2) 00:14:47.706 10.761 - 10.809: 98.8452% ( 1) 00:14:47.706 11.188 - 11.236: 98.8526% ( 1) 00:14:47.706 11.236 - 11.283: 98.8674% ( 2) 00:14:47.706 11.378 - 11.425: 98.8748% ( 1) 00:14:47.706 11.473 - 11.520: 98.8822% ( 1) 00:14:47.706 11.567 - 11.615: 98.8970% ( 2) 00:14:47.706 11.852 - 11.899: 98.9044% ( 1) 00:14:47.706 12.041 - 12.089: 98.9118% ( 1) 00:14:47.706 12.705 - 12.800: 98.9192% ( 1) 00:14:47.706 12.990 - 13.084: 98.9266% ( 1) 00:14:47.706 13.084 - 13.179: 98.9488% ( 3) 00:14:47.706 14.507 - 14.601: 98.9563% ( 1) 00:14:47.706 14.791 - 14.886: 98.9637% ( 1) 00:14:47.706 14.886 - 14.981: 98.9711% ( 1) 00:14:47.706 16.024 - 16.119: 98.9785% ( 
1) 00:14:47.706 16.972 - 17.067: 98.9859% ( 1) 00:14:47.706 17.256 - 17.351: 98.9933% ( 1) 00:14:47.706 17.351 - 17.446: 99.0229% ( 4) 00:14:47.706 17.446 - 17.541: 99.0895% ( 9) 00:14:47.706 17.541 - 17.636: 99.1413% ( 7) 00:14:47.706 17.636 - 17.730: 99.1635% ( 3) 00:14:47.706 17.730 - 17.825: 99.2005% ( 5) 00:14:47.706 17.825 - 17.920: 99.2524% ( 7) 00:14:47.706 17.920 - 18.015: 99.3560% ( 14) 00:14:47.706 18.015 - 18.110: 99.4078% ( 7) 00:14:47.706 18.110 - 18.204: 99.5114% ( 14) 00:14:47.706 18.204 - 18.299: 99.5484% ( 5) 00:14:47.706 18.299 - 18.394: 99.6151% ( 9) 00:14:47.706 18.394 - 18.489: 99.6521% ( 5) 00:14:47.706 18.489 - 18.584: 99.6669% ( 2) 00:14:47.706 18.584 - 18.679: 99.7039% ( 5) 00:14:47.706 18.679 - 18.773: 99.7483% ( 6) 00:14:47.706 18.773 - 18.868: 99.7779% ( 4) 00:14:47.706 18.868 - 18.963: 99.8075% ( 4) 00:14:47.706 18.963 - 19.058: 99.8223% ( 2) 00:14:47.706 19.153 - 19.247: 99.8371% ( 2) 00:14:47.706 19.247 - 19.342: 99.8520% ( 2) 00:14:47.706 19.532 - 19.627: 99.8594% ( 1) 00:14:47.706 19.627 - 19.721: 99.8668% ( 1) 00:14:47.706 19.816 - 19.911: 99.8742% ( 1) 00:14:47.706 20.196 - 20.290: 99.8816% ( 1) 00:14:47.706 20.480 - 20.575: 99.8890% ( 1) 00:14:47.706 21.239 - 21.333: 99.8964% ( 1) 00:14:47.706 24.178 - 24.273: 99.9038% ( 1) 00:14:47.706 3640.889 - 3665.161: 99.9112% ( 1) 00:14:47.706 3980.705 - 4004.978: 100.0000% ( 12) 00:14:47.706 00:14:47.706 Complete histogram 00:14:47.706 ================== 00:14:47.706 Range in us Cumulative Count 00:14:47.706 2.050 - 2.062: 0.0148% ( 2) 00:14:47.706 2.062 - 2.074: 19.2020% ( 2592) 00:14:47.706 2.074 - 2.086: 37.4047% ( 2459) 00:14:47.706 2.086 - 2.098: 39.4182% ( 272) 00:14:47.706 2.098 - 2.110: 54.0603% ( 1978) 00:14:47.706 2.110 - 2.121: 59.5381% ( 740) 00:14:47.706 2.121 - 2.133: 61.2481% ( 231) 00:14:47.706 2.133 - 2.145: 70.9897% ( 1316) 00:14:47.706 2.145 - 2.157: 74.8390% ( 520) 00:14:47.706 2.157 - 2.169: 76.2603% ( 192) 00:14:47.706 2.169 - 2.181: 80.6351% ( 591) 00:14:47.706 2.181 - 2.193: 82.0194% ( 187) 00:14:47.706 2.193 - 2.204: 82.8485% ( 112) 00:14:47.706 2.204 - 2.216: 86.6681% ( 516) 00:14:47.706 2.216 - 2.228: 89.3330% ( 360) 00:14:47.706 2.228 - 2.240: 90.8950% ( 211) 00:14:47.706 2.240 - 2.252: 93.0121% ( 286) 00:14:47.706 2.252 - 2.264: 93.7079% ( 94) 00:14:47.706 2.264 - 2.276: 93.9966% ( 39) 00:14:47.706 2.276 - 2.287: 94.3667% ( 50) 00:14:47.706 2.287 - 2.299: 95.1440% ( 105) 00:14:47.706 2.299 - 2.311: 95.7140% ( 77) 00:14:47.706 2.311 - 2.323: 95.8472% ( 18) 00:14:47.706 2.323 - 2.335: 95.8990% ( 7) 00:14:47.706 2.335 - 2.347: 95.9360% ( 5) 00:14:47.706 2.347 - 2.359: 96.0323% ( 13) 00:14:47.706 2.359 - 2.370: 96.2618% ( 31) 00:14:47.706 2.370 - 2.382: 96.5208% ( 35) 00:14:47.706 2.382 - 2.394: 96.9132% ( 53) 00:14:47.706 2.394 - 2.406: 97.1723% ( 35) 00:14:47.706 2.406 - 2.418: 97.3277% ( 21) 00:14:47.706 2.418 - 2.430: 97.4684% ( 19) 00:14:47.706 2.430 - 2.441: 97.6460% ( 24) 00:14:47.706 2.441 - 2.453: 97.7422% ( 13) 00:14:47.706 2.453 - 2.465: 97.8607% ( 16) 00:14:47.706 2.465 - 2.477: 97.9791% ( 16) 00:14:47.706 2.477 - 2.489: 98.0680% ( 12) 00:14:47.706 2.489 - 2.501: 98.1568% ( 12) 00:14:47.706 2.501 - 2.513: 98.2234% ( 9) 00:14:47.706 2.513 - 2.524: 98.2456% ( 3) 00:14:47.706 2.524 - 2.536: 98.2974% ( 7) 00:14:47.706 2.536 - 2.548: 98.3418% ( 6) 00:14:47.706 2.548 - 2.560: 98.3715% ( 4) 00:14:47.706 2.560 - 2.572: 98.4011% ( 4) 00:14:47.706 2.572 - 2.584: 98.4233% ( 3) 00:14:47.706 2.596 - 2.607: 98.4307% ( 1) 00:14:47.706 2.607 - 2.619: 98.4381% ( 1) 00:14:47.706 2.631 - 2.643: 
98.4603% ( 3) 00:14:47.706 2.643 - 2.655: 98.4677% ( 1) 00:14:47.706 2.667 - 2.679: 98.4751% ( 1) 00:14:47.706 2.690 - 2.702: 98.4825% ( 1) 00:14:47.706 2.714 - 2.726: 98.4899% ( 1) 00:14:47.706 2.738 - 2.750: 98.4973% ( 1) 00:14:47.706 2.750 - 2.761: 98.5047% ( 1) 00:14:47.706 2.880 - 2.892: 98.5121% ( 1) 00:14:47.706 3.295 - 3.319: 98.5269% ( 2) 00:14:47.706 3.342 - 3.366: 98.5343% ( 1) 00:14:47.706 3.366 - 3.390: 98.5417% ( 1) 00:14:47.706 3.413 - 3.437: 98.5639% ( 3) 00:14:47.706 3.437 - 3.461: 98.5713% ( 1) 00:14:47.706 3.508 - 3.532: 98.5861% ( 2) 00:14:47.706 3.532 - 3.556: 98.6083% ( 3) 00:14:47.706 3.579 - 3.603: 98.6305% ( 3) 00:14:47.706 3.627 - 3.650: 98.6379% ( 1) 00:14:47.706 3.650 - 3.674: 98.6453% ( 1) 00:14:47.706 3.698 - 3.721: 98.6528% ( 1) 00:14:47.706 3.769 - 3.793: 98.6602% ( 1) 00:14:47.706 3.840 - 3.864: 98.6676% ( 1) 00:14:47.706 3.887 - 3.911: 98.6750% ( 1) 00:14:47.706 3.911 - 3.935: 98.6824% ( 1) 00:14:47.706 3.935 - 3.959: 98.6898% ( 1) 00:14:47.706 3.959 - 3.982: 98.6972% ( 1) 00:14:47.706 3.982 - 4.006: 98.7046% ( 1) 00:14:47.706 5.618 - 5.641: 98.7120% ( 1) 00:14:47.706 5.736 - 5.760: 98.7194% ( 1) 00:14:47.706 6.637 - 6.684: 98.7268% ( 1) 00:14:47.706 6.732 - 6.779: 98.7342% ( 1) 00:14:47.706 6.779 - 6.827: 98.7416% ( 1) 00:14:47.706 6.827 - 6.874: 98.7564% ( 2) 00:14:47.706 6.969 - 7.016: 98.7638% ( 1) 00:14:47.706 7.064 - 7.111: 9[2024-07-12 13:22:45.014973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:47.706 8.7712% ( 1) 00:14:47.706 7.159 - 7.206: 98.7786% ( 1) 00:14:47.706 7.206 - 7.253: 98.7934% ( 2) 00:14:47.706 7.443 - 7.490: 98.8008% ( 1) 00:14:47.706 8.391 - 8.439: 98.8082% ( 1) 00:14:47.706 10.050 - 10.098: 98.8156% ( 1) 00:14:47.706 15.550 - 15.644: 98.8378% ( 3) 00:14:47.706 15.644 - 15.739: 98.8452% ( 1) 00:14:47.706 15.739 - 15.834: 98.8600% ( 2) 00:14:47.706 15.834 - 15.929: 98.8674% ( 1) 00:14:47.706 15.929 - 16.024: 98.8748% ( 1) 00:14:47.706 16.024 - 16.119: 98.8896% ( 2) 00:14:47.706 16.119 - 16.213: 98.9340% ( 6) 00:14:47.706 16.213 - 16.308: 98.9637% ( 4) 00:14:47.706 16.308 - 16.403: 98.9859% ( 3) 00:14:47.706 16.403 - 16.498: 99.0081% ( 3) 00:14:47.706 16.498 - 16.593: 99.0599% ( 7) 00:14:47.706 16.593 - 16.687: 99.1043% ( 6) 00:14:47.706 16.687 - 16.782: 99.1931% ( 12) 00:14:47.706 16.782 - 16.877: 99.2301% ( 5) 00:14:47.706 16.877 - 16.972: 99.2375% ( 1) 00:14:47.706 17.067 - 17.161: 99.2598% ( 3) 00:14:47.706 17.161 - 17.256: 99.2894% ( 4) 00:14:47.706 17.256 - 17.351: 99.3116% ( 3) 00:14:47.707 17.351 - 17.446: 99.3264% ( 2) 00:14:47.707 17.446 - 17.541: 99.3412% ( 2) 00:14:47.707 17.730 - 17.825: 99.3486% ( 1) 00:14:47.707 17.825 - 17.920: 99.3560% ( 1) 00:14:47.707 18.015 - 18.110: 99.3634% ( 1) 00:14:47.707 18.110 - 18.204: 99.3782% ( 2) 00:14:47.707 18.204 - 18.299: 99.3856% ( 1) 00:14:47.707 18.679 - 18.773: 99.3930% ( 1) 00:14:47.707 18.963 - 19.058: 99.4004% ( 1) 00:14:47.707 19.153 - 19.247: 99.4078% ( 1) 00:14:47.707 19.911 - 20.006: 99.4152% ( 1) 00:14:47.707 21.144 - 21.239: 99.4226% ( 1) 00:14:47.707 28.634 - 28.824: 99.4300% ( 1) 00:14:47.707 33.754 - 33.944: 99.4374% ( 1) 00:14:47.707 3980.705 - 4004.978: 99.8890% ( 61) 00:14:47.707 4004.978 - 4029.250: 100.0000% ( 15) 00:14:47.707 00:14:47.707 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:47.707 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local 
traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:47.707 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:47.707 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:47.707 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:47.964 [ 00:14:47.964 { 00:14:47.964 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:47.964 "subtype": "Discovery", 00:14:47.964 "listen_addresses": [], 00:14:47.964 "allow_any_host": true, 00:14:47.964 "hosts": [] 00:14:47.964 }, 00:14:47.964 { 00:14:47.964 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:47.964 "subtype": "NVMe", 00:14:47.964 "listen_addresses": [ 00:14:47.964 { 00:14:47.964 "trtype": "VFIOUSER", 00:14:47.964 "adrfam": "IPv4", 00:14:47.964 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:47.964 "trsvcid": "0" 00:14:47.964 } 00:14:47.964 ], 00:14:47.964 "allow_any_host": true, 00:14:47.964 "hosts": [], 00:14:47.964 "serial_number": "SPDK1", 00:14:47.964 "model_number": "SPDK bdev Controller", 00:14:47.964 "max_namespaces": 32, 00:14:47.964 "min_cntlid": 1, 00:14:47.964 "max_cntlid": 65519, 00:14:47.964 "namespaces": [ 00:14:47.964 { 00:14:47.964 "nsid": 1, 00:14:47.964 "bdev_name": "Malloc1", 00:14:47.964 "name": "Malloc1", 00:14:47.964 "nguid": "AE32CB81A0D34A61BB151848F6224B2E", 00:14:47.964 "uuid": "ae32cb81-a0d3-4a61-bb15-1848f6224b2e" 00:14:47.964 } 00:14:47.964 ] 00:14:47.964 }, 00:14:47.964 { 00:14:47.964 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:47.964 "subtype": "NVMe", 00:14:47.964 "listen_addresses": [ 00:14:47.964 { 00:14:47.964 "trtype": "VFIOUSER", 00:14:47.964 "adrfam": "IPv4", 00:14:47.964 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:47.964 "trsvcid": "0" 00:14:47.964 } 00:14:47.964 ], 00:14:47.964 "allow_any_host": true, 00:14:47.964 "hosts": [], 00:14:47.964 "serial_number": "SPDK2", 00:14:47.964 "model_number": "SPDK bdev Controller", 00:14:47.964 "max_namespaces": 32, 00:14:47.964 "min_cntlid": 1, 00:14:47.964 "max_cntlid": 65519, 00:14:47.964 "namespaces": [ 00:14:47.964 { 00:14:47.964 "nsid": 1, 00:14:47.964 "bdev_name": "Malloc2", 00:14:47.964 "name": "Malloc2", 00:14:47.964 "nguid": "DF0B349EF8D84FA693DB5309C6809487", 00:14:47.964 "uuid": "df0b349e-f8d8-4fa6-93db-5309c6809487" 00:14:47.964 } 00:14:47.964 ] 00:14:47.964 } 00:14:47.964 ] 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3539096 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:47.964 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:47.964 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.221 [2024-07-12 13:22:45.491810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.221 Malloc3 00:14:48.221 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:48.479 [2024-07-12 13:22:45.893652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.479 13:22:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:48.479 Asynchronous Event Request test 00:14:48.479 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.479 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.479 Registering asynchronous event callbacks... 00:14:48.479 Starting namespace attribute notice tests for all controllers... 00:14:48.479 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:48.479 aer_cb - Changed Namespace 00:14:48.479 Cleaning up... 00:14:48.737 [ 00:14:48.737 { 00:14:48.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:48.737 "subtype": "Discovery", 00:14:48.737 "listen_addresses": [], 00:14:48.737 "allow_any_host": true, 00:14:48.737 "hosts": [] 00:14:48.737 }, 00:14:48.737 { 00:14:48.737 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:48.737 "subtype": "NVMe", 00:14:48.737 "listen_addresses": [ 00:14:48.737 { 00:14:48.737 "trtype": "VFIOUSER", 00:14:48.737 "adrfam": "IPv4", 00:14:48.737 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:48.737 "trsvcid": "0" 00:14:48.737 } 00:14:48.737 ], 00:14:48.737 "allow_any_host": true, 00:14:48.737 "hosts": [], 00:14:48.737 "serial_number": "SPDK1", 00:14:48.737 "model_number": "SPDK bdev Controller", 00:14:48.737 "max_namespaces": 32, 00:14:48.737 "min_cntlid": 1, 00:14:48.737 "max_cntlid": 65519, 00:14:48.737 "namespaces": [ 00:14:48.737 { 00:14:48.737 "nsid": 1, 00:14:48.737 "bdev_name": "Malloc1", 00:14:48.737 "name": "Malloc1", 00:14:48.737 "nguid": "AE32CB81A0D34A61BB151848F6224B2E", 00:14:48.737 "uuid": "ae32cb81-a0d3-4a61-bb15-1848f6224b2e" 00:14:48.737 }, 00:14:48.737 { 00:14:48.737 "nsid": 2, 00:14:48.737 "bdev_name": "Malloc3", 00:14:48.737 "name": "Malloc3", 00:14:48.737 "nguid": "DE8FEAF8727247D1A0FAFCD57D9F0FFD", 00:14:48.737 "uuid": "de8feaf8-7272-47d1-a0fa-fcd57d9f0ffd" 00:14:48.737 } 00:14:48.737 ] 00:14:48.737 }, 00:14:48.737 { 00:14:48.737 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:48.737 "subtype": "NVMe", 00:14:48.737 "listen_addresses": [ 00:14:48.737 { 00:14:48.737 "trtype": "VFIOUSER", 00:14:48.737 "adrfam": "IPv4", 00:14:48.737 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:48.737 "trsvcid": "0" 00:14:48.737 } 00:14:48.737 ], 00:14:48.737 "allow_any_host": true, 00:14:48.737 "hosts": [], 00:14:48.737 "serial_number": "SPDK2", 00:14:48.737 "model_number": "SPDK bdev Controller", 00:14:48.737 
"max_namespaces": 32, 00:14:48.737 "min_cntlid": 1, 00:14:48.737 "max_cntlid": 65519, 00:14:48.737 "namespaces": [ 00:14:48.737 { 00:14:48.737 "nsid": 1, 00:14:48.737 "bdev_name": "Malloc2", 00:14:48.737 "name": "Malloc2", 00:14:48.737 "nguid": "DF0B349EF8D84FA693DB5309C6809487", 00:14:48.737 "uuid": "df0b349e-f8d8-4fa6-93db-5309c6809487" 00:14:48.737 } 00:14:48.737 ] 00:14:48.737 } 00:14:48.737 ] 00:14:48.737 13:22:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3539096 00:14:48.737 13:22:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.737 13:22:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:48.737 13:22:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:48.737 13:22:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:48.737 [2024-07-12 13:22:46.195224] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:14:48.738 [2024-07-12 13:22:46.195264] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539114 ] 00:14:48.738 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.997 [2024-07-12 13:22:46.210836] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:48.997 [2024-07-12 13:22:46.228533] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:48.997 [2024-07-12 13:22:46.237648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:48.997 [2024-07-12 13:22:46.237696] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa4d3c00000 00:14:48.997 [2024-07-12 13:22:46.238636] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.997 [2024-07-12 13:22:46.239660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.997 [2024-07-12 13:22:46.240670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.997 [2024-07-12 13:22:46.241691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.997 [2024-07-12 13:22:46.242682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.997 [2024-07-12 13:22:46.243683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.997 [2024-07-12 13:22:46.244708] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.997 [2024-07-12 13:22:46.245702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.997 [2024-07-12 13:22:46.246709] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:48.997 [2024-07-12 13:22:46.246730] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa4d29c2000 00:14:48.997 [2024-07-12 13:22:46.247842] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:48.997 [2024-07-12 13:22:46.263066] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:48.997 [2024-07-12 13:22:46.263102] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:48.997 [2024-07-12 13:22:46.268216] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:48.997 [2024-07-12 13:22:46.268265] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:48.997 [2024-07-12 13:22:46.268368] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:48.997 [2024-07-12 13:22:46.268392] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:48.997 [2024-07-12 13:22:46.268402] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
00:14:48.997 [2024-07-12 13:22:46.269217] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:48.997 [2024-07-12 13:22:46.269237] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:48.998 [2024-07-12 13:22:46.269249] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:48.998 [2024-07-12 13:22:46.270225] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:48.998 [2024-07-12 13:22:46.270245] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:48.998 [2024-07-12 13:22:46.270258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:48.998 [2024-07-12 13:22:46.271234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:48.998 [2024-07-12 13:22:46.271261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:48.998 [2024-07-12 13:22:46.272236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:48.998 [2024-07-12 13:22:46.272256] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:48.998 [2024-07-12 13:22:46.272265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:48.998 [2024-07-12 13:22:46.272276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:48.998 [2024-07-12 13:22:46.272386] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:48.998 [2024-07-12 13:22:46.272396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:48.998 [2024-07-12 13:22:46.272405] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:48.998 [2024-07-12 13:22:46.273239] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:48.998 [2024-07-12 13:22:46.274245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:48.998 [2024-07-12 13:22:46.275260] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:48.998 [2024-07-12 13:22:46.276252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:48.998 [2024-07-12 13:22:46.276340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:14:48.998 [2024-07-12 13:22:46.277264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:48.998 [2024-07-12 13:22:46.277283] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:48.998 [2024-07-12 13:22:46.277308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.277339] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:48.998 [2024-07-12 13:22:46.277358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.277377] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.998 [2024-07-12 13:22:46.277387] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.998 [2024-07-12 13:22:46.277405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.998 [2024-07-12 13:22:46.285331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:48.998 [2024-07-12 13:22:46.285353] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:48.998 [2024-07-12 13:22:46.285367] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:48.998 [2024-07-12 13:22:46.285375] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:48.998 [2024-07-12 13:22:46.285386] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:48.998 [2024-07-12 13:22:46.285395] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:48.998 [2024-07-12 13:22:46.285402] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:48.998 [2024-07-12 13:22:46.285411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.285423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.285439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:48.998 [2024-07-12 13:22:46.293338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:48.998 [2024-07-12 13:22:46.293382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.998 [2024-07-12 13:22:46.293397] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.998 [2024-07-12 13:22:46.293410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.998 [2024-07-12 13:22:46.293422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.998 [2024-07-12 13:22:46.293431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.293446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.293460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:48.998 [2024-07-12 13:22:46.301327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:48.998 [2024-07-12 13:22:46.301346] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:48.998 [2024-07-12 13:22:46.301377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.301390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.301400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.301414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:48.998 [2024-07-12 13:22:46.309330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:48.998 [2024-07-12 13:22:46.309403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.309418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.309431] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:48.998 [2024-07-12 13:22:46.309443] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:48.998 [2024-07-12 13:22:46.309454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:48.998 [2024-07-12 13:22:46.317326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:48.998 [2024-07-12 13:22:46.317350] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:48.998 [2024-07-12 13:22:46.317366] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.317381] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.317394] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.998 [2024-07-12 13:22:46.317402] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.998 [2024-07-12 13:22:46.317412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.998 [2024-07-12 13:22:46.325326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:48.998 [2024-07-12 13:22:46.325354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.325386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:48.998 [2024-07-12 13:22:46.325400] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.998 [2024-07-12 13:22:46.325408] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.998 [2024-07-12 13:22:46.325418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.999 [2024-07-12 13:22:46.333329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:48.999 [2024-07-12 13:22:46.333350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:48.999 [2024-07-12 13:22:46.333376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:48.999 [2024-07-12 13:22:46.333391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:48.999 [2024-07-12 13:22:46.333401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:48.999 [2024-07-12 13:22:46.333410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:48.999 [2024-07-12 13:22:46.333418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:48.999 [2024-07-12 13:22:46.333426] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:48.999 [2024-07-12 13:22:46.333435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:48.999 
[2024-07-12 13:22:46.333443] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:48.999 [2024-07-12 13:22:46.333468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:48.999 [2024-07-12 13:22:46.341340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:48.999 [2024-07-12 13:22:46.341367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:48.999 [2024-07-12 13:22:46.349343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:48.999 [2024-07-12 13:22:46.349369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:48.999 [2024-07-12 13:22:46.357331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:48.999 [2024-07-12 13:22:46.357357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:48.999 [2024-07-12 13:22:46.365331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:48.999 [2024-07-12 13:22:46.365367] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:48.999 [2024-07-12 13:22:46.365378] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:48.999 [2024-07-12 13:22:46.365384] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:48.999 [2024-07-12 13:22:46.365390] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:48.999 [2024-07-12 13:22:46.365399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:48.999 [2024-07-12 13:22:46.365411] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:48.999 [2024-07-12 13:22:46.365419] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:48.999 [2024-07-12 13:22:46.365427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:48.999 [2024-07-12 13:22:46.365438] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:48.999 [2024-07-12 13:22:46.365446] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.999 [2024-07-12 13:22:46.365454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.999 [2024-07-12 13:22:46.365465] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:48.999 [2024-07-12 13:22:46.365473] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:48.999 [2024-07-12 13:22:46.365482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:48.999 [2024-07-12 13:22:46.373332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:48.999 [2024-07-12 13:22:46.373359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:48.999 [2024-07-12 13:22:46.373376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:48.999 [2024-07-12 13:22:46.373388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:48.999 ===================================================== 00:14:48.999 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:48.999 ===================================================== 00:14:48.999 Controller Capabilities/Features 00:14:48.999 ================================ 00:14:48.999 Vendor ID: 4e58 00:14:48.999 Subsystem Vendor ID: 4e58 00:14:48.999 Serial Number: SPDK2 00:14:48.999 Model Number: SPDK bdev Controller 00:14:48.999 Firmware Version: 24.09 00:14:48.999 Recommended Arb Burst: 6 00:14:48.999 IEEE OUI Identifier: 8d 6b 50 00:14:48.999 Multi-path I/O 00:14:48.999 May have multiple subsystem ports: Yes 00:14:48.999 May have multiple controllers: Yes 00:14:48.999 Associated with SR-IOV VF: No 00:14:48.999 Max Data Transfer Size: 131072 00:14:48.999 Max Number of Namespaces: 32 00:14:48.999 Max Number of I/O Queues: 127 00:14:48.999 NVMe Specification Version (VS): 1.3 00:14:48.999 NVMe Specification Version (Identify): 1.3 00:14:48.999 Maximum Queue Entries: 256 00:14:48.999 Contiguous Queues Required: Yes 00:14:48.999 Arbitration Mechanisms Supported 00:14:48.999 Weighted Round Robin: Not Supported 00:14:48.999 Vendor Specific: Not Supported 00:14:48.999 Reset Timeout: 15000 ms 00:14:48.999 Doorbell Stride: 4 bytes 00:14:48.999 NVM Subsystem Reset: Not Supported 00:14:48.999 Command Sets Supported 00:14:48.999 NVM Command Set: Supported 00:14:48.999 Boot Partition: Not Supported 00:14:48.999 Memory Page Size Minimum: 4096 bytes 00:14:48.999 Memory Page Size Maximum: 4096 bytes 00:14:48.999 Persistent Memory Region: Not Supported 00:14:48.999 Optional Asynchronous Events Supported 00:14:48.999 Namespace Attribute Notices: Supported 00:14:48.999 Firmware Activation Notices: Not Supported 00:14:48.999 ANA Change Notices: Not Supported 00:14:48.999 PLE Aggregate Log Change Notices: Not Supported 00:14:48.999 LBA Status Info Alert Notices: Not Supported 00:14:48.999 EGE Aggregate Log Change Notices: Not Supported 00:14:48.999 Normal NVM Subsystem Shutdown event: Not Supported 00:14:48.999 Zone Descriptor Change Notices: Not Supported 00:14:48.999 Discovery Log Change Notices: Not Supported 00:14:48.999 Controller Attributes 00:14:48.999 128-bit Host Identifier: Supported 00:14:48.999 Non-Operational Permissive Mode: Not Supported 00:14:48.999 NVM Sets: Not Supported 00:14:48.999 Read Recovery Levels: Not Supported 00:14:48.999 Endurance Groups: Not Supported 00:14:48.999 Predictable Latency Mode: Not Supported 00:14:48.999 Traffic Based Keep ALive: Not Supported 00:14:48.999 Namespace Granularity: Not Supported 00:14:48.999 SQ Associations: Not Supported 00:14:48.999 UUID List: Not Supported 00:14:48.999 Multi-Domain Subsystem: Not Supported 00:14:49.000 Fixed Capacity Management: 
Not Supported 00:14:49.000 Variable Capacity Management: Not Supported 00:14:49.000 Delete Endurance Group: Not Supported 00:14:49.000 Delete NVM Set: Not Supported 00:14:49.000 Extended LBA Formats Supported: Not Supported 00:14:49.000 Flexible Data Placement Supported: Not Supported 00:14:49.000 00:14:49.000 Controller Memory Buffer Support 00:14:49.000 ================================ 00:14:49.000 Supported: No 00:14:49.000 00:14:49.000 Persistent Memory Region Support 00:14:49.000 ================================ 00:14:49.000 Supported: No 00:14:49.000 00:14:49.000 Admin Command Set Attributes 00:14:49.000 ============================ 00:14:49.000 Security Send/Receive: Not Supported 00:14:49.000 Format NVM: Not Supported 00:14:49.000 Firmware Activate/Download: Not Supported 00:14:49.000 Namespace Management: Not Supported 00:14:49.000 Device Self-Test: Not Supported 00:14:49.000 Directives: Not Supported 00:14:49.000 NVMe-MI: Not Supported 00:14:49.000 Virtualization Management: Not Supported 00:14:49.000 Doorbell Buffer Config: Not Supported 00:14:49.000 Get LBA Status Capability: Not Supported 00:14:49.000 Command & Feature Lockdown Capability: Not Supported 00:14:49.000 Abort Command Limit: 4 00:14:49.000 Async Event Request Limit: 4 00:14:49.000 Number of Firmware Slots: N/A 00:14:49.000 Firmware Slot 1 Read-Only: N/A 00:14:49.000 Firmware Activation Without Reset: N/A 00:14:49.000 Multiple Update Detection Support: N/A 00:14:49.000 Firmware Update Granularity: No Information Provided 00:14:49.000 Per-Namespace SMART Log: No 00:14:49.000 Asymmetric Namespace Access Log Page: Not Supported 00:14:49.000 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:49.000 Command Effects Log Page: Supported 00:14:49.000 Get Log Page Extended Data: Supported 00:14:49.000 Telemetry Log Pages: Not Supported 00:14:49.000 Persistent Event Log Pages: Not Supported 00:14:49.000 Supported Log Pages Log Page: May Support 00:14:49.000 Commands Supported & Effects Log Page: Not Supported 00:14:49.000 Feature Identifiers & Effects Log Page:May Support 00:14:49.000 NVMe-MI Commands & Effects Log Page: May Support 00:14:49.000 Data Area 4 for Telemetry Log: Not Supported 00:14:49.000 Error Log Page Entries Supported: 128 00:14:49.000 Keep Alive: Supported 00:14:49.000 Keep Alive Granularity: 10000 ms 00:14:49.000 00:14:49.000 NVM Command Set Attributes 00:14:49.000 ========================== 00:14:49.000 Submission Queue Entry Size 00:14:49.000 Max: 64 00:14:49.000 Min: 64 00:14:49.000 Completion Queue Entry Size 00:14:49.000 Max: 16 00:14:49.000 Min: 16 00:14:49.000 Number of Namespaces: 32 00:14:49.000 Compare Command: Supported 00:14:49.000 Write Uncorrectable Command: Not Supported 00:14:49.000 Dataset Management Command: Supported 00:14:49.000 Write Zeroes Command: Supported 00:14:49.000 Set Features Save Field: Not Supported 00:14:49.000 Reservations: Not Supported 00:14:49.000 Timestamp: Not Supported 00:14:49.000 Copy: Supported 00:14:49.000 Volatile Write Cache: Present 00:14:49.000 Atomic Write Unit (Normal): 1 00:14:49.000 Atomic Write Unit (PFail): 1 00:14:49.000 Atomic Compare & Write Unit: 1 00:14:49.000 Fused Compare & Write: Supported 00:14:49.000 Scatter-Gather List 00:14:49.000 SGL Command Set: Supported (Dword aligned) 00:14:49.000 SGL Keyed: Not Supported 00:14:49.000 SGL Bit Bucket Descriptor: Not Supported 00:14:49.000 SGL Metadata Pointer: Not Supported 00:14:49.000 Oversized SGL: Not Supported 00:14:49.000 SGL Metadata Address: Not Supported 00:14:49.000 SGL Offset: Not Supported 
00:14:49.000 Transport SGL Data Block: Not Supported 00:14:49.000 Replay Protected Memory Block: Not Supported 00:14:49.000 00:14:49.000 Firmware Slot Information 00:14:49.000 ========================= 00:14:49.000 Active slot: 1 00:14:49.000 Slot 1 Firmware Revision: 24.09 00:14:49.000 00:14:49.000 00:14:49.000 Commands Supported and Effects 00:14:49.000 ============================== 00:14:49.000 Admin Commands 00:14:49.000 -------------- 00:14:49.000 Get Log Page (02h): Supported 00:14:49.000 Identify (06h): Supported 00:14:49.000 Abort (08h): Supported 00:14:49.000 Set Features (09h): Supported 00:14:49.000 Get Features (0Ah): Supported 00:14:49.000 Asynchronous Event Request (0Ch): Supported 00:14:49.000 Keep Alive (18h): Supported 00:14:49.000 I/O Commands 00:14:49.000 ------------ 00:14:49.000 Flush (00h): Supported LBA-Change 00:14:49.000 Write (01h): Supported LBA-Change 00:14:49.000 Read (02h): Supported 00:14:49.000 Compare (05h): Supported 00:14:49.000 Write Zeroes (08h): Supported LBA-Change 00:14:49.000 Dataset Management (09h): Supported LBA-Change 00:14:49.000 Copy (19h): Supported LBA-Change 00:14:49.000 00:14:49.000 Error Log 00:14:49.000 ========= 00:14:49.000 00:14:49.000 Arbitration 00:14:49.000 =========== 00:14:49.000 Arbitration Burst: 1 00:14:49.000 00:14:49.000 Power Management 00:14:49.000 ================ 00:14:49.000 Number of Power States: 1 00:14:49.000 Current Power State: Power State #0 00:14:49.000 Power State #0: 00:14:49.000 Max Power: 0.00 W 00:14:49.000 Non-Operational State: Operational 00:14:49.000 Entry Latency: Not Reported 00:14:49.000 Exit Latency: Not Reported 00:14:49.000 Relative Read Throughput: 0 00:14:49.000 Relative Read Latency: 0 00:14:49.000 Relative Write Throughput: 0 00:14:49.000 Relative Write Latency: 0 00:14:49.000 Idle Power: Not Reported 00:14:49.000 Active Power: Not Reported 00:14:49.000 Non-Operational Permissive Mode: Not Supported 00:14:49.000 00:14:49.000 Health Information 00:14:49.000 ================== 00:14:49.000 Critical Warnings: 00:14:49.000 Available Spare Space: OK 00:14:49.000 Temperature: OK 00:14:49.000 Device Reliability: OK 00:14:49.000 Read Only: No 00:14:49.000 Volatile Memory Backup: OK 00:14:49.000 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:49.000 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:49.000 Available Spare: 0% 00:14:49.001 Available Sp[2024-07-12 13:22:46.373498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:49.001 [2024-07-12 13:22:46.381332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:49.001 [2024-07-12 13:22:46.381380] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:49.001 [2024-07-12 13:22:46.381401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.001 [2024-07-12 13:22:46.381413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.001 [2024-07-12 13:22:46.381423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.001 [2024-07-12 13:22:46.381432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:49.001 [2024-07-12 13:22:46.381512] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:49.001 [2024-07-12 13:22:46.381533] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:49.001 [2024-07-12 13:22:46.382520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:49.001 [2024-07-12 13:22:46.382615] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:49.001 [2024-07-12 13:22:46.382644] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:49.001 [2024-07-12 13:22:46.383544] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:49.001 [2024-07-12 13:22:46.383569] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:49.001 [2024-07-12 13:22:46.383635] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:49.001 [2024-07-12 13:22:46.384856] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.001 are Threshold: 0% 00:14:49.001 Life Percentage Used: 0% 00:14:49.001 Data Units Read: 0 00:14:49.001 Data Units Written: 0 00:14:49.001 Host Read Commands: 0 00:14:49.001 Host Write Commands: 0 00:14:49.001 Controller Busy Time: 0 minutes 00:14:49.001 Power Cycles: 0 00:14:49.001 Power On Hours: 0 hours 00:14:49.001 Unsafe Shutdowns: 0 00:14:49.001 Unrecoverable Media Errors: 0 00:14:49.001 Lifetime Error Log Entries: 0 00:14:49.001 Warning Temperature Time: 0 minutes 00:14:49.001 Critical Temperature Time: 0 minutes 00:14:49.001 00:14:49.001 Number of Queues 00:14:49.001 ================ 00:14:49.001 Number of I/O Submission Queues: 127 00:14:49.001 Number of I/O Completion Queues: 127 00:14:49.001 00:14:49.001 Active Namespaces 00:14:49.001 ================= 00:14:49.001 Namespace ID:1 00:14:49.001 Error Recovery Timeout: Unlimited 00:14:49.001 Command Set Identifier: NVM (00h) 00:14:49.001 Deallocate: Supported 00:14:49.001 Deallocated/Unwritten Error: Not Supported 00:14:49.001 Deallocated Read Value: Unknown 00:14:49.001 Deallocate in Write Zeroes: Not Supported 00:14:49.001 Deallocated Guard Field: 0xFFFF 00:14:49.001 Flush: Supported 00:14:49.001 Reservation: Supported 00:14:49.001 Namespace Sharing Capabilities: Multiple Controllers 00:14:49.001 Size (in LBAs): 131072 (0GiB) 00:14:49.001 Capacity (in LBAs): 131072 (0GiB) 00:14:49.001 Utilization (in LBAs): 131072 (0GiB) 00:14:49.001 NGUID: DF0B349EF8D84FA693DB5309C6809487 00:14:49.001 UUID: df0b349e-f8d8-4fa6-93db-5309c6809487 00:14:49.001 Thin Provisioning: Not Supported 00:14:49.001 Per-NS Atomic Units: Yes 00:14:49.001 Atomic Boundary Size (Normal): 0 00:14:49.001 Atomic Boundary Size (PFail): 0 00:14:49.001 Atomic Boundary Offset: 0 00:14:49.001 Maximum Single Source Range Length: 65535 00:14:49.001 Maximum Copy Length: 65535 00:14:49.001 Maximum Source Range Count: 1 00:14:49.001 NGUID/EUI64 Never Reused: No 00:14:49.001 Namespace Write Protected: No 00:14:49.001 Number of LBA Formats: 1 00:14:49.001 Current LBA Format: LBA Format #00 
00:14:49.001 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:49.001 00:14:49.001 13:22:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:49.001 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.259 [2024-07-12 13:22:46.611175] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.569 Initializing NVMe Controllers 00:14:54.569 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:54.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:54.569 Initialization complete. Launching workers. 00:14:54.569 ======================================================== 00:14:54.569 Latency(us) 00:14:54.569 Device Information : IOPS MiB/s Average min max 00:14:54.569 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33731.59 131.76 3794.38 1191.33 9608.06 00:14:54.569 ======================================================== 00:14:54.569 Total : 33731.59 131.76 3794.38 1191.33 9608.06 00:14:54.569 00:14:54.569 [2024-07-12 13:22:51.711687] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.569 13:22:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:54.569 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.569 [2024-07-12 13:22:51.952445] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.830 Initializing NVMe Controllers 00:14:59.830 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:59.830 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:59.830 Initialization complete. Launching workers. 
00:14:59.830 ======================================================== 00:14:59.830 Latency(us) 00:14:59.830 Device Information : IOPS MiB/s Average min max 00:14:59.830 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31289.70 122.23 4090.11 1218.87 8991.30 00:14:59.830 ======================================================== 00:14:59.830 Total : 31289.70 122.23 4090.11 1218.87 8991.30 00:14:59.830 00:14:59.830 [2024-07-12 13:22:56.976772] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.830 13:22:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:59.830 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.830 [2024-07-12 13:22:57.194602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.093 [2024-07-12 13:23:02.329474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.093 Initializing NVMe Controllers 00:15:05.093 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.093 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.094 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:05.094 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:05.094 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:05.094 Initialization complete. Launching workers. 00:15:05.094 Starting thread on core 2 00:15:05.094 Starting thread on core 3 00:15:05.094 Starting thread on core 1 00:15:05.094 13:23:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:05.094 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.352 [2024-07-12 13:23:02.640828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.634 [2024-07-12 13:23:05.714829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.634 Initializing NVMe Controllers 00:15:08.634 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.634 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.634 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:08.634 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:08.634 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:08.634 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:08.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:08.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:08.634 Initialization complete. Launching workers. 
00:15:08.634 Starting thread on core 1 with urgent priority queue 00:15:08.634 Starting thread on core 2 with urgent priority queue 00:15:08.634 Starting thread on core 3 with urgent priority queue 00:15:08.634 Starting thread on core 0 with urgent priority queue 00:15:08.634 SPDK bdev Controller (SPDK2 ) core 0: 6257.67 IO/s 15.98 secs/100000 ios 00:15:08.634 SPDK bdev Controller (SPDK2 ) core 1: 4885.00 IO/s 20.47 secs/100000 ios 00:15:08.634 SPDK bdev Controller (SPDK2 ) core 2: 6762.67 IO/s 14.79 secs/100000 ios 00:15:08.634 SPDK bdev Controller (SPDK2 ) core 3: 5683.67 IO/s 17.59 secs/100000 ios 00:15:08.634 ======================================================== 00:15:08.634 00:15:08.634 13:23:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:08.634 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.634 [2024-07-12 13:23:06.010814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.634 Initializing NVMe Controllers 00:15:08.634 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.634 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.634 Namespace ID: 1 size: 0GB 00:15:08.634 Initialization complete. 00:15:08.634 INFO: using host memory buffer for IO 00:15:08.634 Hello world! 00:15:08.634 [2024-07-12 13:23:06.019866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.634 13:23:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:08.892 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.892 [2024-07-12 13:23:06.314729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.266 Initializing NVMe Controllers 00:15:10.266 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.266 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.266 Initialization complete. Launching workers. 
00:15:10.266 submit (in ns) avg, min, max = 7377.3, 3583.3, 4004361.1 00:15:10.266 complete (in ns) avg, min, max = 25793.7, 2058.9, 4086042.2 00:15:10.266 00:15:10.266 Submit histogram 00:15:10.266 ================ 00:15:10.266 Range in us Cumulative Count 00:15:10.266 3.579 - 3.603: 1.3130% ( 171) 00:15:10.266 3.603 - 3.627: 6.8873% ( 726) 00:15:10.266 3.627 - 3.650: 16.7076% ( 1279) 00:15:10.266 3.650 - 3.674: 24.8464% ( 1060) 00:15:10.266 3.674 - 3.698: 32.2942% ( 970) 00:15:10.266 3.698 - 3.721: 40.0107% ( 1005) 00:15:10.266 3.721 - 3.745: 46.9595% ( 905) 00:15:10.266 3.745 - 3.769: 52.8870% ( 772) 00:15:10.266 3.769 - 3.793: 57.0178% ( 538) 00:15:10.266 3.793 - 3.816: 60.3348% ( 432) 00:15:10.266 3.816 - 3.840: 63.9051% ( 465) 00:15:10.266 3.840 - 3.864: 67.7749% ( 504) 00:15:10.266 3.864 - 3.887: 71.8136% ( 526) 00:15:10.266 3.887 - 3.911: 75.9905% ( 544) 00:15:10.266 3.911 - 3.935: 80.0292% ( 526) 00:15:10.266 3.935 - 3.959: 83.4076% ( 440) 00:15:10.266 3.959 - 3.982: 86.0258% ( 341) 00:15:10.266 3.982 - 4.006: 87.9837% ( 255) 00:15:10.266 4.006 - 4.030: 89.4272% ( 188) 00:15:10.266 4.030 - 4.053: 90.5943% ( 152) 00:15:10.266 4.053 - 4.077: 91.7383% ( 149) 00:15:10.266 4.077 - 4.101: 92.7595% ( 133) 00:15:10.266 4.101 - 4.124: 93.6195% ( 112) 00:15:10.266 4.124 - 4.148: 94.4487% ( 108) 00:15:10.266 4.148 - 4.172: 94.9555% ( 66) 00:15:10.266 4.172 - 4.196: 95.4162% ( 60) 00:15:10.266 4.196 - 4.219: 95.7386% ( 42) 00:15:10.266 4.219 - 4.243: 96.0381% ( 39) 00:15:10.266 4.243 - 4.267: 96.1840% ( 19) 00:15:10.266 4.267 - 4.290: 96.3759% ( 25) 00:15:10.266 4.290 - 4.314: 96.4757% ( 13) 00:15:10.266 4.314 - 4.338: 96.5756% ( 13) 00:15:10.266 4.338 - 4.361: 96.6984% ( 16) 00:15:10.266 4.361 - 4.385: 96.7829% ( 11) 00:15:10.266 4.385 - 4.409: 96.9364% ( 20) 00:15:10.266 4.409 - 4.433: 96.9979% ( 8) 00:15:10.266 4.433 - 4.456: 97.0286% ( 4) 00:15:10.266 4.456 - 4.480: 97.0670% ( 5) 00:15:10.266 4.480 - 4.504: 97.1130% ( 6) 00:15:10.266 4.504 - 4.527: 97.1207% ( 1) 00:15:10.266 4.527 - 4.551: 97.1668% ( 6) 00:15:10.266 4.551 - 4.575: 97.1821% ( 2) 00:15:10.266 4.599 - 4.622: 97.1898% ( 1) 00:15:10.266 4.622 - 4.646: 97.1975% ( 1) 00:15:10.266 4.646 - 4.670: 97.2052% ( 1) 00:15:10.266 4.693 - 4.717: 97.2205% ( 2) 00:15:10.266 4.788 - 4.812: 97.2359% ( 2) 00:15:10.266 4.812 - 4.836: 97.2512% ( 2) 00:15:10.266 4.836 - 4.859: 97.2743% ( 3) 00:15:10.266 4.859 - 4.883: 97.3127% ( 5) 00:15:10.266 4.883 - 4.907: 97.3741% ( 8) 00:15:10.266 4.907 - 4.930: 97.3894% ( 2) 00:15:10.266 4.930 - 4.954: 97.4201% ( 4) 00:15:10.266 4.954 - 4.978: 97.4816% ( 8) 00:15:10.266 4.978 - 5.001: 97.5353% ( 7) 00:15:10.266 5.001 - 5.025: 97.5891% ( 7) 00:15:10.266 5.025 - 5.049: 97.6582% ( 9) 00:15:10.266 5.049 - 5.073: 97.6966% ( 5) 00:15:10.266 5.073 - 5.096: 97.7810% ( 11) 00:15:10.266 5.096 - 5.120: 97.8271% ( 6) 00:15:10.266 5.120 - 5.144: 97.8501% ( 3) 00:15:10.266 5.144 - 5.167: 97.8655% ( 2) 00:15:10.266 5.167 - 5.191: 97.9039% ( 5) 00:15:10.266 5.191 - 5.215: 97.9192% ( 2) 00:15:10.266 5.215 - 5.239: 97.9730% ( 7) 00:15:10.266 5.239 - 5.262: 98.0037% ( 4) 00:15:10.266 5.262 - 5.286: 98.0344% ( 4) 00:15:10.266 5.286 - 5.310: 98.0498% ( 2) 00:15:10.266 5.310 - 5.333: 98.0574% ( 1) 00:15:10.266 5.381 - 5.404: 98.0651% ( 1) 00:15:10.266 5.476 - 5.499: 98.0728% ( 1) 00:15:10.266 5.523 - 5.547: 98.0881% ( 2) 00:15:10.266 5.570 - 5.594: 98.0958% ( 1) 00:15:10.266 5.594 - 5.618: 98.1035% ( 1) 00:15:10.266 5.736 - 5.760: 98.1189% ( 2) 00:15:10.266 5.997 - 6.021: 98.1265% ( 1) 00:15:10.266 6.021 - 6.044: 98.1342% ( 1) 
00:15:10.266 6.068 - 6.116: 98.1419% ( 1) 00:15:10.266 6.116 - 6.163: 98.1496% ( 1) 00:15:10.266 6.258 - 6.305: 98.1572% ( 1) 00:15:10.266 6.305 - 6.353: 98.1649% ( 1) 00:15:10.266 6.353 - 6.400: 98.1726% ( 1) 00:15:10.266 6.400 - 6.447: 98.1803% ( 1) 00:15:10.266 6.495 - 6.542: 98.1880% ( 1) 00:15:10.266 6.542 - 6.590: 98.1956% ( 1) 00:15:10.266 6.874 - 6.921: 98.2033% ( 1) 00:15:10.266 6.921 - 6.969: 98.2264% ( 3) 00:15:10.266 7.111 - 7.159: 98.2340% ( 1) 00:15:10.266 7.206 - 7.253: 98.2417% ( 1) 00:15:10.266 7.301 - 7.348: 98.2494% ( 1) 00:15:10.266 7.396 - 7.443: 98.2571% ( 1) 00:15:10.266 7.585 - 7.633: 98.2647% ( 1) 00:15:10.266 7.633 - 7.680: 98.2724% ( 1) 00:15:10.266 7.680 - 7.727: 98.2801% ( 1) 00:15:10.266 7.775 - 7.822: 98.2878% ( 1) 00:15:10.266 7.822 - 7.870: 98.2955% ( 1) 00:15:10.266 7.917 - 7.964: 98.3031% ( 1) 00:15:10.266 8.012 - 8.059: 98.3108% ( 1) 00:15:10.266 8.107 - 8.154: 98.3262% ( 2) 00:15:10.266 8.154 - 8.201: 98.3338% ( 1) 00:15:10.266 8.201 - 8.249: 98.3415% ( 1) 00:15:10.266 8.296 - 8.344: 98.3492% ( 1) 00:15:10.266 8.344 - 8.391: 98.3722% ( 3) 00:15:10.266 8.391 - 8.439: 98.3876% ( 2) 00:15:10.266 8.486 - 8.533: 98.3953% ( 1) 00:15:10.266 8.533 - 8.581: 98.4029% ( 1) 00:15:10.266 8.628 - 8.676: 98.4106% ( 1) 00:15:10.266 8.676 - 8.723: 98.4183% ( 1) 00:15:10.266 8.770 - 8.818: 98.4490% ( 4) 00:15:10.266 8.818 - 8.865: 98.4644% ( 2) 00:15:10.266 8.865 - 8.913: 98.4721% ( 1) 00:15:10.266 8.913 - 8.960: 98.4797% ( 1) 00:15:10.266 9.007 - 9.055: 98.4951% ( 2) 00:15:10.266 9.102 - 9.150: 98.5028% ( 1) 00:15:10.266 9.150 - 9.197: 98.5104% ( 1) 00:15:10.266 9.197 - 9.244: 98.5335% ( 3) 00:15:10.266 9.244 - 9.292: 98.5565% ( 3) 00:15:10.266 9.292 - 9.339: 98.5642% ( 1) 00:15:10.266 9.339 - 9.387: 98.5719% ( 1) 00:15:10.266 9.387 - 9.434: 98.5795% ( 1) 00:15:10.266 9.434 - 9.481: 98.5949% ( 2) 00:15:10.266 9.481 - 9.529: 98.6026% ( 1) 00:15:10.266 9.576 - 9.624: 98.6103% ( 1) 00:15:10.266 9.624 - 9.671: 98.6179% ( 1) 00:15:10.266 9.671 - 9.719: 98.6256% ( 1) 00:15:10.266 9.908 - 9.956: 98.6410% ( 2) 00:15:10.266 9.956 - 10.003: 98.6486% ( 1) 00:15:10.266 10.003 - 10.050: 98.6563% ( 1) 00:15:10.266 10.050 - 10.098: 98.6717% ( 2) 00:15:10.266 10.193 - 10.240: 98.6794% ( 1) 00:15:10.266 10.287 - 10.335: 98.6870% ( 1) 00:15:10.266 10.524 - 10.572: 98.6947% ( 1) 00:15:10.266 10.667 - 10.714: 98.7024% ( 1) 00:15:10.266 10.714 - 10.761: 98.7101% ( 1) 00:15:10.266 10.761 - 10.809: 98.7178% ( 1) 00:15:10.266 10.904 - 10.951: 98.7254% ( 1) 00:15:10.266 10.951 - 10.999: 98.7331% ( 1) 00:15:10.266 11.046 - 11.093: 98.7408% ( 1) 00:15:10.266 11.093 - 11.141: 98.7485% ( 1) 00:15:10.266 11.330 - 11.378: 98.7561% ( 1) 00:15:10.266 11.615 - 11.662: 98.7792% ( 3) 00:15:10.266 11.662 - 11.710: 98.7869% ( 1) 00:15:10.266 11.994 - 12.041: 98.7945% ( 1) 00:15:10.266 12.041 - 12.089: 98.8099% ( 2) 00:15:10.266 12.136 - 12.231: 98.8176% ( 1) 00:15:10.266 12.231 - 12.326: 98.8252% ( 1) 00:15:10.266 12.326 - 12.421: 98.8329% ( 1) 00:15:10.266 12.421 - 12.516: 98.8406% ( 1) 00:15:10.266 12.705 - 12.800: 98.8483% ( 1) 00:15:10.266 12.800 - 12.895: 98.8560% ( 1) 00:15:10.266 12.895 - 12.990: 98.8636% ( 1) 00:15:10.266 13.084 - 13.179: 98.8713% ( 1) 00:15:10.266 13.748 - 13.843: 98.8790% ( 1) 00:15:10.266 13.938 - 14.033: 98.8867% ( 1) 00:15:10.266 14.601 - 14.696: 98.8943% ( 1) 00:15:10.266 14.696 - 14.791: 98.9020% ( 1) 00:15:10.266 14.886 - 14.981: 98.9097% ( 1) 00:15:10.266 14.981 - 15.076: 98.9174% ( 1) 00:15:10.266 15.076 - 15.170: 98.9251% ( 1) 00:15:10.266 17.161 - 17.256: 98.9327% ( 1) 
00:15:10.266 17.351 - 17.446: 98.9481% ( 2) 00:15:10.266 17.446 - 17.541: 98.9711% ( 3) 00:15:10.266 17.541 - 17.636: 99.0326% ( 8) 00:15:10.266 17.636 - 17.730: 99.0709% ( 5) 00:15:10.266 17.730 - 17.825: 99.1324% ( 8) 00:15:10.266 17.825 - 17.920: 99.1938% ( 8) 00:15:10.266 17.920 - 18.015: 99.2629% ( 9) 00:15:10.266 18.015 - 18.110: 99.3013% ( 5) 00:15:10.266 18.110 - 18.204: 99.3397% ( 5) 00:15:10.266 18.204 - 18.299: 99.3857% ( 6) 00:15:10.266 18.299 - 18.394: 99.5086% ( 16) 00:15:10.266 18.394 - 18.489: 99.5854% ( 10) 00:15:10.266 18.489 - 18.584: 99.7006% ( 15) 00:15:10.266 18.584 - 18.679: 99.7236% ( 3) 00:15:10.266 18.679 - 18.773: 99.7389% ( 2) 00:15:10.266 18.773 - 18.868: 99.7697% ( 4) 00:15:10.266 18.868 - 18.963: 99.7850% ( 2) 00:15:10.266 18.963 - 19.058: 99.8157% ( 4) 00:15:10.266 19.153 - 19.247: 99.8234% ( 1) 00:15:10.266 19.247 - 19.342: 99.8388% ( 2) 00:15:10.266 19.721 - 19.816: 99.8464% ( 1) 00:15:10.266 20.196 - 20.290: 99.8541% ( 1) 00:15:10.266 20.764 - 20.859: 99.8618% ( 1) 00:15:10.267 22.471 - 22.566: 99.8695% ( 1) 00:15:10.267 23.893 - 23.988: 99.8771% ( 1) 00:15:10.267 23.988 - 24.083: 99.8848% ( 1) 00:15:10.267 24.273 - 24.462: 99.8925% ( 1) 00:15:10.267 24.841 - 25.031: 99.9002% ( 1) 00:15:10.267 28.634 - 28.824: 99.9079% ( 1) 00:15:10.267 29.203 - 29.393: 99.9155% ( 1) 00:15:10.267 3980.705 - 4004.978: 100.0000% ( 11) 00:15:10.267 00:15:10.267 Complete histogram 00:15:10.267 ================== 00:15:10.267 Range in us Cumulative Count 00:15:10.267 2.050 - 2.062: 0.3148% ( 41) 00:15:10.267 2.062 - 2.074: 32.0485% ( 4133) 00:15:10.267 2.074 - 2.086: 47.6889% ( 2037) 00:15:10.267 2.086 - 2.098: 50.3071% ( 341) 00:15:10.267 2.098 - 2.110: 58.3538% ( 1048) 00:15:10.267 2.110 - 2.121: 61.0028% ( 345) 00:15:10.267 2.121 - 2.133: 64.0894% ( 402) 00:15:10.267 2.133 - 2.145: 74.4165% ( 1345) 00:15:10.267 2.145 - 2.157: 76.8351% ( 315) 00:15:10.267 2.157 - 2.169: 78.4628% ( 212) 00:15:10.267 2.169 - 2.181: 81.3345% ( 374) 00:15:10.267 2.181 - 2.193: 82.1023% ( 100) 00:15:10.267 2.193 - 2.204: 83.2156% ( 145) 00:15:10.267 2.204 - 2.216: 87.4923% ( 557) 00:15:10.267 2.216 - 2.228: 90.1797% ( 350) 00:15:10.267 2.228 - 2.240: 91.5464% ( 178) 00:15:10.267 2.240 - 2.252: 92.9745% ( 186) 00:15:10.267 2.252 - 2.264: 93.4889% ( 67) 00:15:10.267 2.264 - 2.276: 93.8037% ( 41) 00:15:10.267 2.276 - 2.287: 94.1569% ( 46) 00:15:10.267 2.287 - 2.299: 94.9017% ( 97) 00:15:10.267 2.299 - 2.311: 95.3471% ( 58) 00:15:10.267 2.311 - 2.323: 95.4776% ( 17) 00:15:10.267 2.323 - 2.335: 95.5851% ( 14) 00:15:10.267 2.335 - 2.347: 95.6235% ( 5) 00:15:10.267 2.347 - 2.359: 95.6772% ( 7) 00:15:10.267 2.359 - 2.370: 95.9843% ( 40) 00:15:10.267 2.370 - 2.382: 96.2991% ( 41) 00:15:10.267 2.382 - 2.394: 96.6830% ( 50) 00:15:10.267 2.394 - 2.406: 96.9518% ( 35) 00:15:10.267 2.406 - 2.418: 97.1744% ( 29) 00:15:10.267 2.418 - 2.430: 97.3741% ( 26) 00:15:10.267 2.430 - 2.441: 97.5737% ( 26) 00:15:10.267 2.441 - 2.453: 97.7887% ( 28) 00:15:10.267 2.453 - 2.465: 97.9115% ( 16) 00:15:10.267 2.465 - 2.477: 98.0651% ( 20) 00:15:10.267 2.477 - 2.489: 98.1572% ( 12) 00:15:10.267 2.489 - 2.501: 98.2724% ( 15) 00:15:10.267 2.501 - 2.513: 98.3492% ( 10) 00:15:10.267 2.513 - 2.524: 98.4106% ( 8) 00:15:10.267 2.524 - 2.536: 98.4644% ( 7) 00:15:10.267 2.536 - 2.548: 98.4721% ( 1) 00:15:10.267 2.548 - 2.560: 98.4797% ( 1) 00:15:10.267 2.560 - 2.572: 98.4951% ( 2) 00:15:10.267 2.572 - 2.584: 98.5028% ( 1) 00:15:10.267 2.596 - 2.607: 98.5104% ( 1) 00:15:10.267 2.607 - 2.619: 98.5181% ( 1) 00:15:10.267 2.619 - 2.631: 
98.5258% ( 1) 00:15:10.267 2.690 - 2.702: 98.5335% ( 1) 00:15:10.267 2.726 - 2.738: 98.5412% ( 1) 00:15:10.267 2.738 - 2.750: 98.5488% ( 1) 00:15:10.267 2.750 - 2.761: 98.5565% ( 1) 00:15:10.267 2.821 - 2.833: 98.5642% ( 1) 00:15:10.267 2.951 - 2.963: 98.5719% ( 1) 00:15:10.267 2.999 - 3.010: 98.5795% ( 1) 00:15:10.267 3.390 - 3.413: 98.5872% ( 1) 00:15:10.267 3.413 - 3.437: 98.5949% ( 1) 00:15:10.267 3.437 - 3.461: 98.6103% ( 2) 00:15:10.267 3.461 - 3.484: 98.6179% ( 1) 00:15:10.267 3.484 - 3.508: 98.6256% ( 1) 00:15:10.267 3.508 - 3.532: 9[2024-07-12 13:23:07.409166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.267 8.6794% ( 7) 00:15:10.267 3.532 - 3.556: 98.6947% ( 2) 00:15:10.267 3.556 - 3.579: 98.7024% ( 1) 00:15:10.267 3.579 - 3.603: 98.7178% ( 2) 00:15:10.267 3.603 - 3.627: 98.7254% ( 1) 00:15:10.267 3.650 - 3.674: 98.7331% ( 1) 00:15:10.267 3.698 - 3.721: 98.7408% ( 1) 00:15:10.267 3.769 - 3.793: 98.7485% ( 1) 00:15:10.267 3.793 - 3.816: 98.7792% ( 4) 00:15:10.267 3.840 - 3.864: 98.7869% ( 1) 00:15:10.267 3.864 - 3.887: 98.7945% ( 1) 00:15:10.267 3.911 - 3.935: 98.8022% ( 1) 00:15:10.267 4.030 - 4.053: 98.8099% ( 1) 00:15:10.267 4.172 - 4.196: 98.8252% ( 2) 00:15:10.267 4.219 - 4.243: 98.8329% ( 1) 00:15:10.267 4.551 - 4.575: 98.8406% ( 1) 00:15:10.267 4.907 - 4.930: 98.8483% ( 1) 00:15:10.267 6.305 - 6.353: 98.8636% ( 2) 00:15:10.267 6.590 - 6.637: 98.8713% ( 1) 00:15:10.267 6.827 - 6.874: 98.8790% ( 1) 00:15:10.267 6.921 - 6.969: 98.8867% ( 1) 00:15:10.267 6.969 - 7.016: 98.8943% ( 1) 00:15:10.267 7.348 - 7.396: 98.9020% ( 1) 00:15:10.267 7.964 - 8.012: 98.9097% ( 1) 00:15:10.267 8.107 - 8.154: 98.9174% ( 1) 00:15:10.267 15.360 - 15.455: 98.9251% ( 1) 00:15:10.267 15.550 - 15.644: 98.9404% ( 2) 00:15:10.267 15.644 - 15.739: 98.9558% ( 2) 00:15:10.267 15.834 - 15.929: 98.9942% ( 5) 00:15:10.267 15.929 - 16.024: 99.0249% ( 4) 00:15:10.267 16.024 - 16.119: 99.0326% ( 1) 00:15:10.267 16.119 - 16.213: 99.0479% ( 2) 00:15:10.267 16.213 - 16.308: 99.0940% ( 6) 00:15:10.267 16.308 - 16.403: 99.1170% ( 3) 00:15:10.267 16.403 - 16.498: 99.1400% ( 3) 00:15:10.267 16.498 - 16.593: 99.2015% ( 8) 00:15:10.267 16.593 - 16.687: 99.2245% ( 3) 00:15:10.267 16.687 - 16.782: 99.3013% ( 10) 00:15:10.267 16.782 - 16.877: 99.3320% ( 4) 00:15:10.267 16.877 - 16.972: 99.3474% ( 2) 00:15:10.267 17.067 - 17.161: 99.3550% ( 1) 00:15:10.267 17.161 - 17.256: 99.3627% ( 1) 00:15:10.267 17.256 - 17.351: 99.3704% ( 1) 00:15:10.267 17.446 - 17.541: 99.3781% ( 1) 00:15:10.267 17.636 - 17.730: 99.3857% ( 1) 00:15:10.267 17.730 - 17.825: 99.3934% ( 1) 00:15:10.267 17.825 - 17.920: 99.4011% ( 1) 00:15:10.267 18.204 - 18.299: 99.4088% ( 1) 00:15:10.267 3058.347 - 3070.483: 99.4165% ( 1) 00:15:10.267 3980.705 - 4004.978: 99.8771% ( 60) 00:15:10.267 4004.978 - 4029.250: 99.9923% ( 15) 00:15:10.267 4077.796 - 4102.068: 100.0000% ( 1) 00:15:10.267 00:15:10.267 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:10.267 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:10.267 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:10.267 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:10.267 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:10.525 [ 00:15:10.525 { 00:15:10.525 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:10.525 "subtype": "Discovery", 00:15:10.525 "listen_addresses": [], 00:15:10.525 "allow_any_host": true, 00:15:10.525 "hosts": [] 00:15:10.525 }, 00:15:10.525 { 00:15:10.525 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:10.525 "subtype": "NVMe", 00:15:10.525 "listen_addresses": [ 00:15:10.525 { 00:15:10.525 "trtype": "VFIOUSER", 00:15:10.525 "adrfam": "IPv4", 00:15:10.525 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:10.525 "trsvcid": "0" 00:15:10.525 } 00:15:10.525 ], 00:15:10.525 "allow_any_host": true, 00:15:10.525 "hosts": [], 00:15:10.525 "serial_number": "SPDK1", 00:15:10.525 "model_number": "SPDK bdev Controller", 00:15:10.525 "max_namespaces": 32, 00:15:10.525 "min_cntlid": 1, 00:15:10.525 "max_cntlid": 65519, 00:15:10.525 "namespaces": [ 00:15:10.525 { 00:15:10.525 "nsid": 1, 00:15:10.525 "bdev_name": "Malloc1", 00:15:10.525 "name": "Malloc1", 00:15:10.525 "nguid": "AE32CB81A0D34A61BB151848F6224B2E", 00:15:10.525 "uuid": "ae32cb81-a0d3-4a61-bb15-1848f6224b2e" 00:15:10.525 }, 00:15:10.525 { 00:15:10.525 "nsid": 2, 00:15:10.525 "bdev_name": "Malloc3", 00:15:10.525 "name": "Malloc3", 00:15:10.525 "nguid": "DE8FEAF8727247D1A0FAFCD57D9F0FFD", 00:15:10.525 "uuid": "de8feaf8-7272-47d1-a0fa-fcd57d9f0ffd" 00:15:10.525 } 00:15:10.525 ] 00:15:10.525 }, 00:15:10.525 { 00:15:10.525 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:10.525 "subtype": "NVMe", 00:15:10.525 "listen_addresses": [ 00:15:10.525 { 00:15:10.525 "trtype": "VFIOUSER", 00:15:10.525 "adrfam": "IPv4", 00:15:10.525 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:10.525 "trsvcid": "0" 00:15:10.525 } 00:15:10.525 ], 00:15:10.525 "allow_any_host": true, 00:15:10.525 "hosts": [], 00:15:10.525 "serial_number": "SPDK2", 00:15:10.525 "model_number": "SPDK bdev Controller", 00:15:10.525 "max_namespaces": 32, 00:15:10.525 "min_cntlid": 1, 00:15:10.525 "max_cntlid": 65519, 00:15:10.526 "namespaces": [ 00:15:10.526 { 00:15:10.526 "nsid": 1, 00:15:10.526 "bdev_name": "Malloc2", 00:15:10.526 "name": "Malloc2", 00:15:10.526 "nguid": "DF0B349EF8D84FA693DB5309C6809487", 00:15:10.526 "uuid": "df0b349e-f8d8-4fa6-93db-5309c6809487" 00:15:10.526 } 00:15:10.526 ] 00:15:10.526 } 00:15:10.526 ] 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3541641 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:10.526 13:23:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:10.526 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.526 [2024-07-12 13:23:07.914775] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.783 Malloc4 00:15:10.784 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:11.041 [2024-07-12 13:23:08.272371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.041 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.041 Asynchronous Event Request test 00:15:11.041 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.041 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.041 Registering asynchronous event callbacks... 00:15:11.041 Starting namespace attribute notice tests for all controllers... 00:15:11.041 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:11.041 aer_cb - Changed Namespace 00:15:11.041 Cleaning up... 00:15:11.299 [ 00:15:11.299 { 00:15:11.299 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.299 "subtype": "Discovery", 00:15:11.299 "listen_addresses": [], 00:15:11.299 "allow_any_host": true, 00:15:11.299 "hosts": [] 00:15:11.299 }, 00:15:11.299 { 00:15:11.299 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.299 "subtype": "NVMe", 00:15:11.299 "listen_addresses": [ 00:15:11.299 { 00:15:11.299 "trtype": "VFIOUSER", 00:15:11.299 "adrfam": "IPv4", 00:15:11.299 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.299 "trsvcid": "0" 00:15:11.299 } 00:15:11.299 ], 00:15:11.299 "allow_any_host": true, 00:15:11.299 "hosts": [], 00:15:11.299 "serial_number": "SPDK1", 00:15:11.299 "model_number": "SPDK bdev Controller", 00:15:11.299 "max_namespaces": 32, 00:15:11.299 "min_cntlid": 1, 00:15:11.299 "max_cntlid": 65519, 00:15:11.299 "namespaces": [ 00:15:11.299 { 00:15:11.299 "nsid": 1, 00:15:11.299 "bdev_name": "Malloc1", 00:15:11.299 "name": "Malloc1", 00:15:11.299 "nguid": "AE32CB81A0D34A61BB151848F6224B2E", 00:15:11.299 "uuid": "ae32cb81-a0d3-4a61-bb15-1848f6224b2e" 00:15:11.299 }, 00:15:11.299 { 00:15:11.299 "nsid": 2, 00:15:11.299 "bdev_name": "Malloc3", 00:15:11.299 "name": "Malloc3", 00:15:11.299 "nguid": "DE8FEAF8727247D1A0FAFCD57D9F0FFD", 00:15:11.299 "uuid": "de8feaf8-7272-47d1-a0fa-fcd57d9f0ffd" 00:15:11.299 } 00:15:11.299 ] 00:15:11.299 }, 00:15:11.299 { 00:15:11.299 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.299 "subtype": "NVMe", 00:15:11.299 "listen_addresses": [ 00:15:11.299 { 00:15:11.299 "trtype": "VFIOUSER", 00:15:11.299 "adrfam": "IPv4", 00:15:11.299 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.299 "trsvcid": "0" 00:15:11.299 } 00:15:11.299 ], 00:15:11.299 "allow_any_host": true, 00:15:11.299 "hosts": [], 00:15:11.299 "serial_number": "SPDK2", 00:15:11.299 "model_number": "SPDK bdev Controller", 00:15:11.299 
"max_namespaces": 32, 00:15:11.299 "min_cntlid": 1, 00:15:11.299 "max_cntlid": 65519, 00:15:11.299 "namespaces": [ 00:15:11.299 { 00:15:11.299 "nsid": 1, 00:15:11.299 "bdev_name": "Malloc2", 00:15:11.299 "name": "Malloc2", 00:15:11.299 "nguid": "DF0B349EF8D84FA693DB5309C6809487", 00:15:11.299 "uuid": "df0b349e-f8d8-4fa6-93db-5309c6809487" 00:15:11.299 }, 00:15:11.299 { 00:15:11.299 "nsid": 2, 00:15:11.299 "bdev_name": "Malloc4", 00:15:11.299 "name": "Malloc4", 00:15:11.299 "nguid": "6FA97FB426724503A84BAE39942F421E", 00:15:11.299 "uuid": "6fa97fb4-2672-4503-a84b-ae39942f421e" 00:15:11.299 } 00:15:11.299 ] 00:15:11.299 } 00:15:11.299 ] 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3541641 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3536109 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3536109 ']' 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3536109 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3536109 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3536109' 00:15:11.299 killing process with pid 3536109 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3536109 00:15:11.299 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3536109 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3541782 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3541782' 00:15:11.557 Process pid: 3541782 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3541782 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3541782 ']' 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.557 13:23:08 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.557 13:23:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:11.557 [2024-07-12 13:23:08.916722] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:11.557 [2024-07-12 13:23:08.917765] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:15:11.557 [2024-07-12 13:23:08.917826] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.557 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.558 [2024-07-12 13:23:08.950387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:11.558 [2024-07-12 13:23:08.977935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.816 [2024-07-12 13:23:09.060713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.816 [2024-07-12 13:23:09.060769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.816 [2024-07-12 13:23:09.060796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.816 [2024-07-12 13:23:09.060806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.816 [2024-07-12 13:23:09.060815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.816 [2024-07-12 13:23:09.060899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.816 [2024-07-12 13:23:09.061007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.816 [2024-07-12 13:23:09.061142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.816 [2024-07-12 13:23:09.061145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.816 [2024-07-12 13:23:09.151271] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:11.816 [2024-07-12 13:23:09.151496] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:11.816 [2024-07-12 13:23:09.151719] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:11.816 [2024-07-12 13:23:09.152279] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:11.816 [2024-07-12 13:23:09.152557] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
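A minimal sketch of the vfio-user target setup that the interrupt-mode run performs next, condensed from the rpc.py calls visible in the trace below; the two-device loop is an illustrative rewrite of the per-device steps, not the literal nvmf_vfio_user.sh code, and the $rpc variable is introduced here only for brevity:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER -M -I        # transport created with the interrupt-mode args '-M -I'
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
      dir=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p $dir
      $rpc bdev_malloc_create 64 512 -b Malloc$i                           # 64 MB malloc bdev, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i  # allow any host, serial SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a $dir -s 0
    done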
00:15:11.816 13:23:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.816 13:23:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:11.816 13:23:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:12.750 13:23:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:13.009 13:23:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:13.009 13:23:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:13.009 13:23:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:13.009 13:23:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:13.009 13:23:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:13.269 Malloc1 00:15:13.269 13:23:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:13.527 13:23:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:13.785 13:23:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:14.042 13:23:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.042 13:23:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:14.042 13:23:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:14.300 Malloc2 00:15:14.300 13:23:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:14.558 13:23:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:14.817 13:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3541782 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3541782 ']' 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3541782 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.124 13:23:12 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3541782 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3541782' 00:15:15.124 killing process with pid 3541782 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3541782 00:15:15.124 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3541782 00:15:15.382 13:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:15.382 13:23:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:15.382 00:15:15.382 real 0m52.475s 00:15:15.382 user 3m27.594s 00:15:15.382 sys 0m4.306s 00:15:15.382 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.382 13:23:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:15.382 ************************************ 00:15:15.382 END TEST nvmf_vfio_user 00:15:15.382 ************************************ 00:15:15.382 13:23:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:15.382 13:23:12 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:15.382 13:23:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:15.382 13:23:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.382 13:23:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:15.382 ************************************ 00:15:15.382 START TEST nvmf_vfio_user_nvme_compliance 00:15:15.382 ************************************ 00:15:15.382 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:15.641 * Looking for test storage... 
00:15:15.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3542375 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3542375' 00:15:15.641 Process pid: 3542375 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3542375 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3542375 ']' 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.641 13:23:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 [2024-07-12 13:23:12.917311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:15:15.641 [2024-07-12 13:23:12.917404] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.641 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.641 [2024-07-12 13:23:12.948285] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:15.641 [2024-07-12 13:23:12.975939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:15.641 [2024-07-12 13:23:13.061981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.641 [2024-07-12 13:23:13.062028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.641 [2024-07-12 13:23:13.062058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.641 [2024-07-12 13:23:13.062071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.641 [2024-07-12 13:23:13.062082] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
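The three app_setup_trace notices above describe how to pull tracepoint data for this target instance (shm id 0, tracepoint group mask 0xFFFF). A minimal capture, run while the target is still alive and assuming the spdk_trace app from this build is on PATH, would be the two commands below; the copy destination is only an example path.

    # Snapshot the nvmf tracepoint group of the running target (instance id 0, as launched above)
    spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory trace file for offline analysis (destination path is illustrative)
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0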
00:15:15.641 [2024-07-12 13:23:13.062166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.641 [2024-07-12 13:23:13.062240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.641 [2024-07-12 13:23:13.062236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.899 13:23:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.899 13:23:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:15.899 13:23:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.830 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.830 malloc0 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.831 13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.831 
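Taken together, the rpc_cmd calls above are a short bring-up sequence for the vfio-user compliance target: create the VFIOUSER transport, create a small malloc bdev, expose it through subsystem nqn.2021-09.io.spdk:cnode0, and add a vfio-user listener under /var/run/vfio-user. A sketch of the same sequence written against scripts/rpc.py on the default /var/tmp/spdk.sock (rpc_cmd in the harness is, in effect, a wrapper around the same RPC methods) is:

    # Bring-up of the vfio-user compliance target, mirroring the rpc_cmd calls above.
    # Assumes the nvmf_tgt started earlier is listening on the default /var/tmp/spdk.sock.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER                              # vfio-user transport
    mkdir -p /var/run/vfio-user                                         # directory backing the vfio-user endpoint
    $rpc bdev_malloc_create 64 512 -b malloc0                           # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0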
13:23:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:17.088 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.088 00:15:17.088 00:15:17.088 CUnit - A unit testing framework for C - Version 2.1-3 00:15:17.088 http://cunit.sourceforge.net/ 00:15:17.088 00:15:17.088 00:15:17.088 Suite: nvme_compliance 00:15:17.088 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 13:23:14.418510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.088 [2024-07-12 13:23:14.419960] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:17.088 [2024-07-12 13:23:14.419984] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:17.088 [2024-07-12 13:23:14.420011] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:17.088 [2024-07-12 13:23:14.421533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.088 passed 00:15:17.088 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 13:23:14.512163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.088 [2024-07-12 13:23:14.515182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.088 passed 00:15:17.346 Test: admin_identify_ns ...[2024-07-12 13:23:14.602877] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.346 [2024-07-12 13:23:14.662335] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:17.346 [2024-07-12 13:23:14.670336] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:17.346 [2024-07-12 13:23:14.691461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.346 passed 00:15:17.346 Test: admin_get_features_mandatory_features ...[2024-07-12 13:23:14.776239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.346 [2024-07-12 13:23:14.779262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.346 passed 00:15:17.603 Test: admin_get_features_optional_features ...[2024-07-12 13:23:14.867869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.603 [2024-07-12 13:23:14.870891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.603 passed 00:15:17.603 Test: admin_set_features_number_of_queues ...[2024-07-12 13:23:14.957149] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.603 [2024-07-12 13:23:15.062430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.860 passed 00:15:17.860 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 13:23:15.146293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.860 [2024-07-12 13:23:15.149335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.860 passed 00:15:17.860 Test: admin_get_log_page_with_lpo ...[2024-07-12 13:23:15.233565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.860 [2024-07-12 13:23:15.301331] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:17.860 [2024-07-12 13:23:15.314416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.118 passed 00:15:18.118 Test: fabric_property_get ...[2024-07-12 13:23:15.404160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.118 [2024-07-12 13:23:15.405456] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:18.118 [2024-07-12 13:23:15.407183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.118 passed 00:15:18.118 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 13:23:15.492766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.118 [2024-07-12 13:23:15.494045] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:18.118 [2024-07-12 13:23:15.495787] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.118 passed 00:15:18.118 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 13:23:15.578986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.376 [2024-07-12 13:23:15.663331] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:18.376 [2024-07-12 13:23:15.679330] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:18.376 [2024-07-12 13:23:15.684420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.376 passed 00:15:18.376 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 13:23:15.767934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.376 [2024-07-12 13:23:15.769224] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:18.376 [2024-07-12 13:23:15.770958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.376 passed 00:15:18.634 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 13:23:15.859050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.634 [2024-07-12 13:23:15.934343] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:18.634 [2024-07-12 13:23:15.958327] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:18.634 [2024-07-12 13:23:15.963448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.634 passed 00:15:18.634 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 13:23:16.047167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.634 [2024-07-12 13:23:16.048487] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:18.634 [2024-07-12 13:23:16.048542] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:18.634 [2024-07-12 13:23:16.050192] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.634 passed 00:15:18.892 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 13:23:16.133934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.892 [2024-07-12 13:23:16.226340] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:18.892 [2024-07-12 13:23:16.234327] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:18.892 [2024-07-12 13:23:16.242328] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:18.892 [2024-07-12 13:23:16.250329] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:18.892 [2024-07-12 13:23:16.279422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.892 passed 00:15:18.892 Test: admin_create_io_sq_verify_pc ...[2024-07-12 13:23:16.360034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.150 [2024-07-12 13:23:16.379340] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:19.150 [2024-07-12 13:23:16.397412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.150 passed 00:15:19.150 Test: admin_create_io_qp_max_qps ...[2024-07-12 13:23:16.477964] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.524 [2024-07-12 13:23:17.583332] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:20.524 [2024-07-12 13:23:17.973577] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.783 passed 00:15:20.783 Test: admin_create_io_sq_shared_cq ...[2024-07-12 13:23:18.057967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.783 [2024-07-12 13:23:18.190338] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:20.783 [2024-07-12 13:23:18.227411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.041 passed 00:15:21.041 00:15:21.041 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.041 suites 1 1 n/a 0 0 00:15:21.041 tests 18 18 18 0 0 00:15:21.041 asserts 360 360 360 0 n/a 00:15:21.041 00:15:21.041 Elapsed time = 1.585 seconds 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3542375 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3542375 ']' 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3542375 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3542375 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3542375' 00:15:21.041 killing process with pid 3542375 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3542375 00:15:21.041 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3542375 00:15:21.299 13:23:18 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:21.299 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:21.299 00:15:21.299 real 0m5.751s 00:15:21.299 user 0m16.196s 00:15:21.299 sys 0m0.559s 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.300 ************************************ 00:15:21.300 END TEST nvmf_vfio_user_nvme_compliance 00:15:21.300 ************************************ 00:15:21.300 13:23:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:21.300 13:23:18 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:21.300 13:23:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.300 13:23:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.300 13:23:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:21.300 ************************************ 00:15:21.300 START TEST nvmf_vfio_user_fuzz 00:15:21.300 ************************************ 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:21.300 * Looking for test storage... 00:15:21.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.300 13:23:18 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3543090 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3543090' 00:15:21.300 Process pid: 3543090 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3543090 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3543090 ']' 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.300 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.558 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.558 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:21.558 13:23:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:22.932 13:23:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:22.932 13:23:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.932 13:23:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.932 malloc0 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:22.932 13:23:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:54.986 Fuzzing completed. 
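The 30-second pass that just finished is the generic nvme_fuzz app aimed at the vfio-user controller set up above; because the seed is fixed (-S 123456) the run is repeatable. A sketch of reproducing it by hand, assuming that target and subsystem are still up, is a single invocation:

    # Re-run the same fuzz pass by hand (fixed seed keeps it repeatable);
    # assumes the vfio-user target and nqn.2021-09.io.spdk:cnode0 from above are still running.
    fuzz=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    $fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a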
Shutting down the fuzz application 00:15:54.986 00:15:54.986 Dumping successful admin opcodes: 00:15:54.986 8, 9, 10, 24, 00:15:54.986 Dumping successful io opcodes: 00:15:54.986 0, 00:15:54.986 NS: 0x200003a1ef00 I/O qp, Total commands completed: 668680, total successful commands: 2609, random_seed: 3075216448 00:15:54.986 NS: 0x200003a1ef00 admin qp, Total commands completed: 85904, total successful commands: 687, random_seed: 840352064 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3543090 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3543090 ']' 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3543090 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:54.986 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3543090 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3543090' 00:15:54.987 killing process with pid 3543090 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3543090 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3543090 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:54.987 00:15:54.987 real 0m32.170s 00:15:54.987 user 0m33.578s 00:15:54.987 sys 0m25.793s 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:54.987 13:23:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.987 ************************************ 00:15:54.987 END TEST nvmf_vfio_user_fuzz 00:15:54.987 ************************************ 00:15:54.987 13:23:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:54.987 13:23:50 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:54.987 13:23:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:54.987 13:23:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.987 13:23:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.987 ************************************ 00:15:54.987 
START TEST nvmf_host_management 00:15:54.987 ************************************ 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:54.987 * Looking for test storage... 00:15:54.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.987 13:23:50 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.987 13:23:50 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.987 13:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:55.553 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:55.553 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:55.553 Found net devices under 0000:09:00.0: cvl_0_0 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:55.553 Found net devices under 0000:09:00.1: cvl_0_1 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:15:55.553 00:15:55.553 --- 10.0.0.2 ping statistics --- 00:15:55.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.553 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:55.553 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:15:55.553 00:15:55.553 --- 10.0.0.1 ping statistics --- 00:15:55.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.553 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.554 13:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3548413 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3548413 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3548413 ']' 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:55.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.554 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:55.812 [2024-07-12 13:23:53.058097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:15:55.812 [2024-07-12 13:23:53.058182] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.812 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.812 [2024-07-12 13:23:53.095884] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:55.812 [2024-07-12 13:23:53.121671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.812 [2024-07-12 13:23:53.212277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.812 [2024-07-12 13:23:53.212355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.812 [2024-07-12 13:23:53.212370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.812 [2024-07-12 13:23:53.212381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.812 [2024-07-12 13:23:53.212404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.812 [2024-07-12 13:23:53.212504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.812 [2024-07-12 13:23:53.212577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.812 [2024-07-12 13:23:53.212647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:55.812 [2024-07-12 13:23:53.212649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.070 [2024-07-12 13:23:53.368105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:56.070 13:23:53 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:56.070 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.071 Malloc0 00:15:56.071 [2024-07-12 13:23:53.429755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3548576 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3548576 /var/tmp/bdevperf.sock 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3548576 ']' 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
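For reference, the target-side plumbing that host_management.sh replays from rpcs.txt can be reproduced by hand with SPDK's rpc.py against the nvmf_tgt started above. Only the transport options, the Malloc0 bdev name, the 10.0.0.2:4420 listener and the cnode0/host0 NQNs come from this log; the bdev size, block size, serial number and exact flags are assumptions for the sketch:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# same transport options as the run above
$rpc nvmf_create_transport -t tcp -o -u 8192
# ramdisk to back the namespace (64 MiB x 512 B blocks assumed)
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# admit the initiator NQN that bdevperf presents below
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0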
00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:56.071 { 00:15:56.071 "params": { 00:15:56.071 "name": "Nvme$subsystem", 00:15:56.071 "trtype": "$TEST_TRANSPORT", 00:15:56.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:56.071 "adrfam": "ipv4", 00:15:56.071 "trsvcid": "$NVMF_PORT", 00:15:56.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:56.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:56.071 "hdgst": ${hdgst:-false}, 00:15:56.071 "ddgst": ${ddgst:-false} 00:15:56.071 }, 00:15:56.071 "method": "bdev_nvme_attach_controller" 00:15:56.071 } 00:15:56.071 EOF 00:15:56.071 )") 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:56.071 13:23:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:56.071 "params": { 00:15:56.071 "name": "Nvme0", 00:15:56.071 "trtype": "tcp", 00:15:56.071 "traddr": "10.0.0.2", 00:15:56.071 "adrfam": "ipv4", 00:15:56.071 "trsvcid": "4420", 00:15:56.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:56.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:56.071 "hdgst": false, 00:15:56.071 "ddgst": false 00:15:56.071 }, 00:15:56.071 "method": "bdev_nvme_attach_controller" 00:15:56.071 }' 00:15:56.071 [2024-07-12 13:23:53.510232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:15:56.071 [2024-07-12 13:23:53.510336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548576 ] 00:15:56.071 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.071 [2024-07-12 13:23:53.542659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:56.328 [2024-07-12 13:23:53.571843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.328 [2024-07-12 13:23:53.658565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.613 Running I/O for 10 seconds... 
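The JSON printed just above by gen_nvmf_target_json is the entire bdev configuration that bdevperf receives over /dev/fd/63. A standalone equivalent with an explicit file is sketched below; the attach parameters are copied from the log, while the outer "subsystems"/"bdev" wrapper is not visible in this excerpt and is assumed to follow the standard SPDK JSON config layout:

cat > /tmp/nvme0_bdev.json <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
# same workload flags as the run above: queue depth 64, 64 KiB verify I/O for 10 s
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -r /var/tmp/bdevperf.sock --json /tmp/nvme0_bdev.json -q 64 -o 65536 -w verify -t 10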
00:15:56.613 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.613 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:56.613 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:56.613 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.613 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:15:56.614 13:23:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:15:56.874 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:15:56.874 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:56.874 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:56.874 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:56.874 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.874 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.874 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.875 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:15:56.875 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:15:56.875 13:23:54 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:15:56.875 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:56.875 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:56.875 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:56.875 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.875 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.875 [2024-07-12 13:23:54.236464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e88900 is same with the state(5) to be set 00:15:56.875 [2024-07-12 13:23:54.237099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.237977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.237991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.238008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.238023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.238040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.238055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.238071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.238085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.238101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.875 [2024-07-12 13:23:54.238115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.875 [2024-07-12 13:23:54.238130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:56.876 [2024-07-12 13:23:54.238354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 
[2024-07-12 13:23:54.238660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 
13:23:54.238964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.238979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.238994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.239009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.876 [2024-07-12 13:23:54.239024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.876 [2024-07-12 13:23:54.239039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.877 [2024-07-12 13:23:54.239058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.877 [2024-07-12 13:23:54.239073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.877 [2024-07-12 13:23:54.239089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.877 [2024-07-12 13:23:54.239103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.877 [2024-07-12 13:23:54.239119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:56.877 [2024-07-12 13:23:54.239133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.877 [2024-07-12 13:23:54.239215] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x287ae10 was disconnected and freed. reset controller. 
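Before the host was removed, the script waited for traffic with waitforio (read_io_count 67, then 451, a few lines up). A minimal standalone version of that poll, assuming rpc.py and jq are available on PATH:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# poll bdevperf's RPC socket until Nvme0n1 has served at least 100 reads
for i in {1..10}; do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done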
00:15:56.877 [2024-07-12 13:23:54.239283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.877 [2024-07-12 13:23:54.239305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.877 [2024-07-12 13:23:54.239329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.877 [2024-07-12 13:23:54.239345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.877 [2024-07-12 13:23:54.239360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.877 [2024-07-12 13:23:54.239374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.877 [2024-07-12 13:23:54.239389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.877 [2024-07-12 13:23:54.239403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.877 [2024-07-12 13:23:54.239416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2469b50 is same with the state(5) to be set 00:15:56.877 [2024-07-12 13:23:54.240540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:56.877 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.877 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:56.877 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.877 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:56.877 task offset: 69888 on job bdev=Nvme0n1 fails 00:15:56.877 00:15:56.877 Latency(us) 00:15:56.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.877 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:56.877 Job: Nvme0n1 ended in about 0.39 seconds with error 00:15:56.877 Verification LBA range: start 0x0 length 0x400 00:15:56.877 Nvme0n1 : 0.39 1298.99 81.19 162.37 0.00 42566.09 2997.67 35146.71 00:15:56.877 =================================================================================================================== 00:15:56.877 Total : 1298.99 81.19 162.37 0.00 42566.09 2997.67 35146.71 00:15:56.877 [2024-07-12 13:23:54.242442] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:56.877 [2024-07-12 13:23:54.242476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2469b50 (9): Bad file descriptor 00:15:56.877 13:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.877 13:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:57.135 [2024-07-12 13:23:54.346477] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
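The failed job and controller reset above were provoked deliberately: host_management.sh@84/85 removes the host NQN from the subsystem while bdevperf still has I/O in flight, then adds it back. The same exercise by hand, assuming the target is on its default RPC socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# revoke the initiator: its queue pairs are torn down and outstanding I/O is aborted,
# which is what produced the SQ DELETION dump and the 'resetting controller' notice
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
# re-admit it so the initiator-side bdev_nvme reset/reconnect can complete
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0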
00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3548576 00:15:58.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3548576) - No such process 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:58.068 { 00:15:58.068 "params": { 00:15:58.068 "name": "Nvme$subsystem", 00:15:58.068 "trtype": "$TEST_TRANSPORT", 00:15:58.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:58.068 "adrfam": "ipv4", 00:15:58.068 "trsvcid": "$NVMF_PORT", 00:15:58.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:58.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:58.068 "hdgst": ${hdgst:-false}, 00:15:58.068 "ddgst": ${ddgst:-false} 00:15:58.068 }, 00:15:58.068 "method": "bdev_nvme_attach_controller" 00:15:58.068 } 00:15:58.068 EOF 00:15:58.068 )") 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:58.068 13:23:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:58.068 "params": { 00:15:58.068 "name": "Nvme0", 00:15:58.068 "trtype": "tcp", 00:15:58.068 "traddr": "10.0.0.2", 00:15:58.068 "adrfam": "ipv4", 00:15:58.068 "trsvcid": "4420", 00:15:58.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:58.068 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:58.068 "hdgst": false, 00:15:58.068 "ddgst": false 00:15:58.068 }, 00:15:58.068 "method": "bdev_nvme_attach_controller" 00:15:58.068 }' 00:15:58.068 [2024-07-12 13:23:55.296783] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:15:58.068 [2024-07-12 13:23:55.296862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548739 ] 00:15:58.068 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.068 [2024-07-12 13:23:55.328981] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:58.068 [2024-07-12 13:23:55.358340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.068 [2024-07-12 13:23:55.446471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.326 Running I/O for 1 seconds... 
00:15:59.699 00:15:59.699 Latency(us) 00:15:59.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.699 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:59.699 Verification LBA range: start 0x0 length 0x400 00:15:59.699 Nvme0n1 : 1.03 1368.22 85.51 0.00 0.00 46084.78 9369.22 41554.68 00:15:59.699 =================================================================================================================== 00:15:59.699 Total : 1368.22 85.51 0.00 0.00 46084.78 9369.22 41554.68 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.699 rmmod nvme_tcp 00:15:59.699 rmmod nvme_fabrics 00:15:59.699 rmmod nvme_keyring 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3548413 ']' 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3548413 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3548413 ']' 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3548413 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3548413 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3548413' 00:15:59.699 killing process with pid 3548413 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3548413 00:15:59.699 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3548413 00:15:59.957 [2024-07-12 13:23:57.370074] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:59.957 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.957 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.957 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.957 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.957 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.957 13:23:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.957 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.957 13:23:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.491 13:23:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:02.491 13:23:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:02.491 00:16:02.491 real 0m8.609s 00:16:02.491 user 0m19.637s 00:16:02.491 sys 0m2.710s 00:16:02.491 13:23:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:02.491 13:23:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:02.491 ************************************ 00:16:02.491 END TEST nvmf_host_management 00:16:02.491 ************************************ 00:16:02.491 13:23:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:02.491 13:23:59 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:02.491 13:23:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:02.491 13:23:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.491 13:23:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.491 ************************************ 00:16:02.491 START TEST nvmf_lvol 00:16:02.491 ************************************ 00:16:02.491 13:23:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:02.491 * Looking for test storage... 
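Stripped of the xtrace noise, the nvmftestfini teardown that just ran amounts to roughly the following; interface, namespace and module names are taken from this run, and the namespace removal is an assumed equivalent of _remove_spdk_ns:

sync
modprobe -v -r nvme-tcp              # rmmod of nvme_tcp/nvme_fabrics/nvme_keyring, as logged
kill "$nvmfpid"                      # killprocess() in the harness also waits for the target to exit
ip netns delete cvl_0_0_ns_spdk 2>/dev/null
ip -4 addr flush cvl_0_1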
00:16:02.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.492 13:23:59 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:02.492 13:23:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:04.392 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:04.392 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:04.392 Found net devices under 0000:09:00.0: cvl_0_0 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:04.392 Found net devices under 0000:09:00.1: cvl_0_1 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:04.392 
13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.392 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:04.393 00:16:04.393 --- 10.0.0.2 ping statistics --- 00:16:04.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.393 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:16:04.393 00:16:04.393 --- 10.0.0.1 ping statistics --- 00:16:04.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.393 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3551025 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3551025 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3551025 ']' 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.393 13:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:04.393 [2024-07-12 13:24:01.780219] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:16:04.393 [2024-07-12 13:24:01.780324] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.393 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.393 [2024-07-12 13:24:01.818353] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:04.393 [2024-07-12 13:24:01.844811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.651 [2024-07-12 13:24:01.933342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:04.651 [2024-07-12 13:24:01.933395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.651 [2024-07-12 13:24:01.933422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.651 [2024-07-12 13:24:01.933434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.651 [2024-07-12 13:24:01.933445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.651 [2024-07-12 13:24:01.933516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.651 [2024-07-12 13:24:01.935336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.651 [2024-07-12 13:24:01.935348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.651 13:24:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.651 13:24:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:04.651 13:24:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.651 13:24:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:04.651 13:24:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:04.651 13:24:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.651 13:24:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:04.908 [2024-07-12 13:24:02.307513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.908 13:24:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.165 13:24:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:05.165 13:24:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.422 13:24:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:05.422 13:24:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:05.678 13:24:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:05.935 13:24:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bda8640a-c5c3-4513-a74e-19546af772b4 00:16:05.935 13:24:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bda8640a-c5c3-4513-a74e-19546af772b4 lvol 20 00:16:06.191 13:24:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=88217a5b-17e6-4748-b5aa-5fe167f5a0a0 00:16:06.191 13:24:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:06.447 13:24:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 88217a5b-17e6-4748-b5aa-5fe167f5a0a0 00:16:06.703 13:24:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:06.960 [2024-07-12 13:24:04.305666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.960 13:24:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:07.217 13:24:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3551465 00:16:07.217 13:24:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:07.217 13:24:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:07.217 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.151 13:24:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 88217a5b-17e6-4748-b5aa-5fe167f5a0a0 MY_SNAPSHOT 00:16:08.717 13:24:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=298a0983-3cc4-44cb-92e6-ddc01b46e8b6 00:16:08.717 13:24:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 88217a5b-17e6-4748-b5aa-5fe167f5a0a0 30 00:16:08.717 13:24:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 298a0983-3cc4-44cb-92e6-ddc01b46e8b6 MY_CLONE 00:16:08.974 13:24:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1d1b733e-9c26-40e1-a4aa-be45e38aca58 00:16:08.974 13:24:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1d1b733e-9c26-40e1-a4aa-be45e38aca58 00:16:09.542 13:24:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3551465 00:16:17.647 Initializing NVMe Controllers 00:16:17.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:17.647 Controller IO queue size 128, less than required. 00:16:17.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:17.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:17.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:17.647 Initialization complete. Launching workers. 
00:16:17.647 ======================================================== 00:16:17.647 Latency(us) 00:16:17.647 Device Information : IOPS MiB/s Average min max 00:16:17.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9632.10 37.63 13289.98 4292.94 76605.22 00:16:17.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10853.60 42.40 11794.32 2192.43 74063.80 00:16:17.647 ======================================================== 00:16:17.647 Total : 20485.70 80.02 12497.56 2192.43 76605.22 00:16:17.647 00:16:17.647 13:24:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:17.935 13:24:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 88217a5b-17e6-4748-b5aa-5fe167f5a0a0 00:16:18.193 13:24:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bda8640a-c5c3-4513-a74e-19546af772b4 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.451 rmmod nvme_tcp 00:16:18.451 rmmod nvme_fabrics 00:16:18.451 rmmod nvme_keyring 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3551025 ']' 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3551025 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3551025 ']' 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3551025 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3551025 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3551025' 00:16:18.451 killing process with pid 3551025 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3551025 00:16:18.451 13:24:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3551025 00:16:18.709 13:24:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:18.709 
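Stripped of the harness plumbing, the nvmf_lvol flow exercised above is a short rpc.py sequence. The sketch below condenses it; the rpc.py path is shortened, the UUID captures are added for readability, and argument values are the ones visible in this run (size units are whatever bdev_lvol_create/resize expect in this SPDK revision):

    rpc=scripts/rpc.py                                    # full path in the trace: .../spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport init
    $rpc bdev_malloc_create 64 512                        # -> Malloc0
    $rpc bdev_malloc_create 64 512                        # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # bda8640a-... in this run
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 88217a5b-... in this run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # taken while spdk_nvme_perf writes to the namespace
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 # teardown, as in the trace above
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"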
13:24:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:18.709 13:24:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:18.709 13:24:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.709 13:24:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:18.709 13:24:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.709 13:24:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.709 13:24:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.252 13:24:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:21.252 00:16:21.252 real 0m18.702s 00:16:21.252 user 1m2.074s 00:16:21.252 sys 0m6.328s 00:16:21.252 13:24:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.252 13:24:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:21.252 ************************************ 00:16:21.252 END TEST nvmf_lvol 00:16:21.253 ************************************ 00:16:21.253 13:24:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:21.253 13:24:18 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:21.253 13:24:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:21.253 13:24:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.253 13:24:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.253 ************************************ 00:16:21.253 START TEST nvmf_lvs_grow 00:16:21.253 ************************************ 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:21.253 * Looking for test storage... 
00:16:21.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:21.253 13:24:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:23.157 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:23.157 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:23.157 Found net devices under 0000:09:00.0: cvl_0_0 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:23.157 Found net devices under 0000:09:00.1: cvl_0_1 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:23.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:16:23.157 00:16:23.157 --- 10.0.0.2 ping statistics --- 00:16:23.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.157 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:23.157 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:16:23.158 00:16:23.158 --- 10.0.0.1 ping statistics --- 00:16:23.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.158 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3555231 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3555231 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3555231 ']' 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.158 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.416 [2024-07-12 13:24:20.629161] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:16:23.416 [2024-07-12 13:24:20.629233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.416 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.416 [2024-07-12 13:24:20.665038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
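The network bring-up repeated for each test suite follows one pattern, visible again in the trace above: the target-side port is moved into its own network namespace and the SPDK target runs inside it, while the initiator port stays in the host namespace. A condensed sketch using the interface names, addresses, and flags from this run (binary path shortened):

    ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1   # target listens on 10.0.0.2:4420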
00:16:23.416 [2024-07-12 13:24:20.691772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.416 [2024-07-12 13:24:20.775264] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.416 [2024-07-12 13:24:20.775349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.416 [2024-07-12 13:24:20.775364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.416 [2024-07-12 13:24:20.775375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.416 [2024-07-12 13:24:20.775385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.416 [2024-07-12 13:24:20.775426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.416 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.416 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:16:23.416 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.416 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.416 13:24:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.674 13:24:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.674 13:24:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:23.674 [2024-07-12 13:24:21.112662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.674 13:24:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:23.674 13:24:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:23.674 13:24:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.674 13:24:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.933 ************************************ 00:16:23.933 START TEST lvs_grow_clean 00:16:23.933 ************************************ 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:23.933 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:24.190 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:24.190 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:24.449 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:24.449 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:24.449 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:24.707 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:24.707 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:24.707 13:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 lvol 150 00:16:24.965 13:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e2cd5c9e-8cf9-4de4-a11a-c540d8886f21 00:16:24.965 13:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:24.965 13:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:24.965 [2024-07-12 13:24:22.418433] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:24.966 [2024-07-12 13:24:22.418537] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:24.966 true 00:16:24.966 13:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:24.966 13:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:25.223 13:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:25.223 13:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:25.480 13:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e2cd5c9e-8cf9-4de4-a11a-c540d8886f21 00:16:25.737 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:25.995 [2024-07-12 13:24:23.397402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.995 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3555570 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3555570 /var/tmp/bdevperf.sock 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3555570 ']' 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.253 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:26.253 [2024-07-12 13:24:23.706411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:16:26.253 [2024-07-12 13:24:23.706493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555570 ] 00:16:26.509 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.509 [2024-07-12 13:24:23.741635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
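The lvs_grow_clean case being set up here checks that an lvstore can claim space added to its base bdev: a 200M file-backed aio bdev carries a 4 MiB-cluster lvstore (49 data clusters) and a 150M lvol; the file is then truncated to 400M, bdev_aio_rescan picks up the new size, and bdev_lvol_grow_lvstore grows the store to 99 data clusters. A condensed sketch of those steps with the values from this run (rpc.py and the aio file path are shortened, /tmp/aio_bdev is a hypothetical stand-in for test/nvmf/target/aio_bdev, and the ordering is simplified; in the run the grow happens while bdevperf is writing):

    rpc=scripts/rpc.py
    truncate -s 200M /tmp/aio_bdev                        # hypothetical path; the trace uses test/nvmf/target/aio_bdev
    $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 with the 200M file
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M /tmp/aio_bdev                        # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                         # aio bdev re-reads its size
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                 # lvstore claims the new clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 afterwards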
00:16:26.509 [2024-07-12 13:24:23.770161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.509 [2024-07-12 13:24:23.856129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.509 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.509 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:16:26.509 13:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:27.073 Nvme0n1 00:16:27.073 13:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:27.330 [ 00:16:27.330 { 00:16:27.330 "name": "Nvme0n1", 00:16:27.330 "aliases": [ 00:16:27.330 "e2cd5c9e-8cf9-4de4-a11a-c540d8886f21" 00:16:27.330 ], 00:16:27.330 "product_name": "NVMe disk", 00:16:27.330 "block_size": 4096, 00:16:27.330 "num_blocks": 38912, 00:16:27.330 "uuid": "e2cd5c9e-8cf9-4de4-a11a-c540d8886f21", 00:16:27.330 "assigned_rate_limits": { 00:16:27.330 "rw_ios_per_sec": 0, 00:16:27.330 "rw_mbytes_per_sec": 0, 00:16:27.330 "r_mbytes_per_sec": 0, 00:16:27.330 "w_mbytes_per_sec": 0 00:16:27.330 }, 00:16:27.330 "claimed": false, 00:16:27.330 "zoned": false, 00:16:27.330 "supported_io_types": { 00:16:27.330 "read": true, 00:16:27.330 "write": true, 00:16:27.330 "unmap": true, 00:16:27.330 "flush": true, 00:16:27.330 "reset": true, 00:16:27.330 "nvme_admin": true, 00:16:27.330 "nvme_io": true, 00:16:27.330 "nvme_io_md": false, 00:16:27.330 "write_zeroes": true, 00:16:27.330 "zcopy": false, 00:16:27.330 "get_zone_info": false, 00:16:27.330 "zone_management": false, 00:16:27.330 "zone_append": false, 00:16:27.330 "compare": true, 00:16:27.330 "compare_and_write": true, 00:16:27.330 "abort": true, 00:16:27.330 "seek_hole": false, 00:16:27.330 "seek_data": false, 00:16:27.330 "copy": true, 00:16:27.330 "nvme_iov_md": false 00:16:27.330 }, 00:16:27.330 "memory_domains": [ 00:16:27.330 { 00:16:27.330 "dma_device_id": "system", 00:16:27.330 "dma_device_type": 1 00:16:27.330 } 00:16:27.330 ], 00:16:27.330 "driver_specific": { 00:16:27.330 "nvme": [ 00:16:27.330 { 00:16:27.330 "trid": { 00:16:27.330 "trtype": "TCP", 00:16:27.330 "adrfam": "IPv4", 00:16:27.330 "traddr": "10.0.0.2", 00:16:27.330 "trsvcid": "4420", 00:16:27.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:27.330 }, 00:16:27.330 "ctrlr_data": { 00:16:27.330 "cntlid": 1, 00:16:27.330 "vendor_id": "0x8086", 00:16:27.330 "model_number": "SPDK bdev Controller", 00:16:27.330 "serial_number": "SPDK0", 00:16:27.330 "firmware_revision": "24.09", 00:16:27.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:27.330 "oacs": { 00:16:27.330 "security": 0, 00:16:27.330 "format": 0, 00:16:27.330 "firmware": 0, 00:16:27.330 "ns_manage": 0 00:16:27.330 }, 00:16:27.330 "multi_ctrlr": true, 00:16:27.330 "ana_reporting": false 00:16:27.330 }, 00:16:27.330 "vs": { 00:16:27.330 "nvme_version": "1.3" 00:16:27.330 }, 00:16:27.330 "ns_data": { 00:16:27.330 "id": 1, 00:16:27.330 "can_share": true 00:16:27.330 } 00:16:27.330 } 00:16:27.330 ], 00:16:27.330 "mp_policy": "active_passive" 00:16:27.330 } 00:16:27.330 } 00:16:27.330 ] 00:16:27.331 13:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3555684 00:16:27.331 13:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:27.331 13:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:27.331 Running I/O for 10 seconds... 00:16:28.741 Latency(us) 00:16:28.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.741 Nvme0n1 : 1.00 15445.00 60.33 0.00 0.00 0.00 0.00 0.00 00:16:28.741 =================================================================================================================== 00:16:28.741 Total : 15445.00 60.33 0.00 0.00 0.00 0.00 0.00 00:16:28.741 00:16:29.305 13:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:29.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.564 Nvme0n1 : 2.00 15552.50 60.75 0.00 0.00 0.00 0.00 0.00 00:16:29.564 =================================================================================================================== 00:16:29.564 Total : 15552.50 60.75 0.00 0.00 0.00 0.00 0.00 00:16:29.564 00:16:29.564 true 00:16:29.564 13:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:29.564 13:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:29.822 13:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:29.822 13:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:29.822 13:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3555684 00:16:30.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:30.386 Nvme0n1 : 3.00 15649.67 61.13 0.00 0.00 0.00 0.00 0.00 00:16:30.386 =================================================================================================================== 00:16:30.386 Total : 15649.67 61.13 0.00 0.00 0.00 0.00 0.00 00:16:30.386 00:16:31.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:31.317 Nvme0n1 : 4.00 15714.75 61.39 0.00 0.00 0.00 0.00 0.00 00:16:31.317 =================================================================================================================== 00:16:31.317 Total : 15714.75 61.39 0.00 0.00 0.00 0.00 0.00 00:16:31.317 00:16:32.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.691 Nvme0n1 : 5.00 15757.60 61.55 0.00 0.00 0.00 0.00 0.00 00:16:32.691 =================================================================================================================== 00:16:32.691 Total : 15757.60 61.55 0.00 0.00 0.00 0.00 0.00 00:16:32.691 00:16:33.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.623 Nvme0n1 : 6.00 15824.00 61.81 0.00 0.00 0.00 0.00 0.00 00:16:33.623 =================================================================================================================== 
00:16:33.623 Total : 15824.00 61.81 0.00 0.00 0.00 0.00 0.00 00:16:33.623 00:16:34.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.583 Nvme0n1 : 7.00 15871.71 62.00 0.00 0.00 0.00 0.00 0.00 00:16:34.583 =================================================================================================================== 00:16:34.583 Total : 15871.71 62.00 0.00 0.00 0.00 0.00 0.00 00:16:34.583 00:16:35.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.512 Nvme0n1 : 8.00 15895.00 62.09 0.00 0.00 0.00 0.00 0.00 00:16:35.512 =================================================================================================================== 00:16:35.512 Total : 15895.00 62.09 0.00 0.00 0.00 0.00 0.00 00:16:35.512 00:16:36.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.442 Nvme0n1 : 9.00 15938.56 62.26 0.00 0.00 0.00 0.00 0.00 00:16:36.442 =================================================================================================================== 00:16:36.443 Total : 15938.56 62.26 0.00 0.00 0.00 0.00 0.00 00:16:36.443 00:16:37.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.375 Nvme0n1 : 10.00 15966.30 62.37 0.00 0.00 0.00 0.00 0.00 00:16:37.375 =================================================================================================================== 00:16:37.375 Total : 15966.30 62.37 0.00 0.00 0.00 0.00 0.00 00:16:37.375 00:16:37.375 00:16:37.375 Latency(us) 00:16:37.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.375 Nvme0n1 : 10.00 15972.74 62.39 0.00 0.00 8009.14 2815.62 16019.91 00:16:37.375 =================================================================================================================== 00:16:37.375 Total : 15972.74 62.39 0.00 0.00 8009.14 2815.62 16019.91 00:16:37.375 0 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3555570 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3555570 ']' 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3555570 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3555570 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3555570' 00:16:37.375 killing process with pid 3555570 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3555570 00:16:37.375 Received shutdown signal, test time was about 10.000000 seconds 00:16:37.375 00:16:37.375 Latency(us) 00:16:37.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.375 
=================================================================================================================== 00:16:37.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.375 13:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3555570 00:16:37.633 13:24:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:37.891 13:24:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:38.148 13:24:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:38.148 13:24:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:38.406 13:24:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:38.406 13:24:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:38.406 13:24:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:38.664 [2024-07-12 13:24:36.022024] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:38.664 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:38.922 request: 00:16:38.922 { 00:16:38.922 "uuid": "e8e5fbe6-0370-45ec-959c-82b3ec80c177", 00:16:38.922 "method": "bdev_lvol_get_lvstores", 00:16:38.922 "req_id": 1 00:16:38.922 } 00:16:38.922 Got JSON-RPC error response 00:16:38.922 response: 00:16:38.922 { 00:16:38.922 "code": -19, 00:16:38.922 "message": "No such device" 00:16:38.922 } 00:16:38.922 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:38.922 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:38.922 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:38.922 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:38.922 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:39.180 aio_bdev 00:16:39.180 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e2cd5c9e-8cf9-4de4-a11a-c540d8886f21 00:16:39.180 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=e2cd5c9e-8cf9-4de4-a11a-c540d8886f21 00:16:39.180 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:39.180 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:16:39.180 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.180 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.180 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:39.437 13:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e2cd5c9e-8cf9-4de4-a11a-c540d8886f21 -t 2000 00:16:39.694 [ 00:16:39.694 { 00:16:39.694 "name": "e2cd5c9e-8cf9-4de4-a11a-c540d8886f21", 00:16:39.694 "aliases": [ 00:16:39.694 "lvs/lvol" 00:16:39.694 ], 00:16:39.694 "product_name": "Logical Volume", 00:16:39.694 "block_size": 4096, 00:16:39.694 "num_blocks": 38912, 00:16:39.694 "uuid": "e2cd5c9e-8cf9-4de4-a11a-c540d8886f21", 00:16:39.694 "assigned_rate_limits": { 00:16:39.694 "rw_ios_per_sec": 0, 00:16:39.694 "rw_mbytes_per_sec": 0, 00:16:39.694 "r_mbytes_per_sec": 0, 00:16:39.694 "w_mbytes_per_sec": 0 00:16:39.694 }, 00:16:39.694 "claimed": false, 00:16:39.694 "zoned": false, 00:16:39.694 "supported_io_types": { 00:16:39.694 "read": true, 00:16:39.694 "write": true, 00:16:39.694 "unmap": true, 00:16:39.694 "flush": false, 00:16:39.694 "reset": true, 00:16:39.694 "nvme_admin": false, 00:16:39.694 "nvme_io": false, 00:16:39.694 "nvme_io_md": false, 00:16:39.694 "write_zeroes": true, 00:16:39.694 "zcopy": false, 00:16:39.694 "get_zone_info": false, 00:16:39.694 "zone_management": false, 00:16:39.694 "zone_append": false, 00:16:39.694 "compare": false, 00:16:39.694 "compare_and_write": false, 00:16:39.694 "abort": false, 00:16:39.694 "seek_hole": true, 00:16:39.694 
"seek_data": true, 00:16:39.694 "copy": false, 00:16:39.694 "nvme_iov_md": false 00:16:39.694 }, 00:16:39.694 "driver_specific": { 00:16:39.694 "lvol": { 00:16:39.694 "lvol_store_uuid": "e8e5fbe6-0370-45ec-959c-82b3ec80c177", 00:16:39.694 "base_bdev": "aio_bdev", 00:16:39.694 "thin_provision": false, 00:16:39.694 "num_allocated_clusters": 38, 00:16:39.694 "snapshot": false, 00:16:39.694 "clone": false, 00:16:39.694 "esnap_clone": false 00:16:39.694 } 00:16:39.694 } 00:16:39.694 } 00:16:39.694 ] 00:16:39.694 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:16:39.694 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:39.694 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:39.951 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:39.951 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:39.951 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:40.208 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:40.208 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e2cd5c9e-8cf9-4de4-a11a-c540d8886f21 00:16:40.466 13:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8e5fbe6-0370-45ec-959c-82b3ec80c177 00:16:40.723 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:40.981 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:40.981 00:16:40.981 real 0m17.280s 00:16:40.981 user 0m16.724s 00:16:40.981 sys 0m1.903s 00:16:40.981 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.981 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:40.981 ************************************ 00:16:40.981 END TEST lvs_grow_clean 00:16:40.981 ************************************ 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:41.240 ************************************ 00:16:41.240 START TEST lvs_grow_dirty 00:16:41.240 ************************************ 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 
00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:41.240 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:41.500 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:41.500 13:24:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:41.758 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6d098c08-0594-4c38-a6f0-88609a613b74 00:16:41.758 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:41.758 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:42.015 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:42.015 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:42.015 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6d098c08-0594-4c38-a6f0-88609a613b74 lvol 150 00:16:42.273 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5fab99dd-f41a-4bfa-b3b3-ff09497be01c 00:16:42.273 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:42.273 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:42.531 [2024-07-12 13:24:39.766421] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:42.531 [2024-07-12 13:24:39.766515] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev 
event: type 1 00:16:42.531 true 00:16:42.531 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:42.531 13:24:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:42.789 13:24:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:42.789 13:24:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:43.046 13:24:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5fab99dd-f41a-4bfa-b3b3-ff09497be01c 00:16:43.304 13:24:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:43.562 [2024-07-12 13:24:40.789467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.562 13:24:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3557712 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3557712 /var/tmp/bdevperf.sock 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3557712 ']' 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.819 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:43.819 [2024-07-12 13:24:41.094117] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
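At this point the trace has exported the lvol over NVMe/TCP and is starting bdevperf against it. Gathered into one place as a sketch (command lines and the 5fab99dd lvol UUID are taken from this run's xtrace; in the harness bdevperf is backgrounded and the script waits for its RPC socket before attaching):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Export the lvol through an NVMe-oF TCP subsystem listening on 10.0.0.2:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5fab99dd-f41a-4bfa-b3b3-ff09497be01c
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Drive it with bdevperf over a private RPC socket: attach the remote namespace, then run the workload.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests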
00:16:43.819 [2024-07-12 13:24:41.094191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3557712 ] 00:16:43.819 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.819 [2024-07-12 13:24:41.125086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:43.819 [2024-07-12 13:24:41.152460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.819 [2024-07-12 13:24:41.238322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.076 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.076 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:44.076 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:44.333 Nvme0n1 00:16:44.333 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:44.590 [ 00:16:44.590 { 00:16:44.590 "name": "Nvme0n1", 00:16:44.590 "aliases": [ 00:16:44.590 "5fab99dd-f41a-4bfa-b3b3-ff09497be01c" 00:16:44.590 ], 00:16:44.590 "product_name": "NVMe disk", 00:16:44.590 "block_size": 4096, 00:16:44.590 "num_blocks": 38912, 00:16:44.590 "uuid": "5fab99dd-f41a-4bfa-b3b3-ff09497be01c", 00:16:44.590 "assigned_rate_limits": { 00:16:44.590 "rw_ios_per_sec": 0, 00:16:44.590 "rw_mbytes_per_sec": 0, 00:16:44.590 "r_mbytes_per_sec": 0, 00:16:44.590 "w_mbytes_per_sec": 0 00:16:44.590 }, 00:16:44.590 "claimed": false, 00:16:44.590 "zoned": false, 00:16:44.590 "supported_io_types": { 00:16:44.590 "read": true, 00:16:44.590 "write": true, 00:16:44.590 "unmap": true, 00:16:44.590 "flush": true, 00:16:44.590 "reset": true, 00:16:44.590 "nvme_admin": true, 00:16:44.590 "nvme_io": true, 00:16:44.590 "nvme_io_md": false, 00:16:44.590 "write_zeroes": true, 00:16:44.590 "zcopy": false, 00:16:44.590 "get_zone_info": false, 00:16:44.590 "zone_management": false, 00:16:44.590 "zone_append": false, 00:16:44.590 "compare": true, 00:16:44.590 "compare_and_write": true, 00:16:44.590 "abort": true, 00:16:44.590 "seek_hole": false, 00:16:44.590 "seek_data": false, 00:16:44.590 "copy": true, 00:16:44.590 "nvme_iov_md": false 00:16:44.590 }, 00:16:44.590 "memory_domains": [ 00:16:44.590 { 00:16:44.590 "dma_device_id": "system", 00:16:44.590 "dma_device_type": 1 00:16:44.590 } 00:16:44.590 ], 00:16:44.590 "driver_specific": { 00:16:44.590 "nvme": [ 00:16:44.590 { 00:16:44.590 "trid": { 00:16:44.590 "trtype": "TCP", 00:16:44.590 "adrfam": "IPv4", 00:16:44.590 "traddr": "10.0.0.2", 00:16:44.590 "trsvcid": "4420", 00:16:44.590 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:44.590 }, 00:16:44.590 "ctrlr_data": { 00:16:44.590 "cntlid": 1, 00:16:44.590 "vendor_id": "0x8086", 00:16:44.590 "model_number": "SPDK bdev Controller", 00:16:44.590 "serial_number": "SPDK0", 00:16:44.590 "firmware_revision": "24.09", 00:16:44.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:44.590 "oacs": { 00:16:44.590 "security": 0, 
00:16:44.590 "format": 0, 00:16:44.590 "firmware": 0, 00:16:44.590 "ns_manage": 0 00:16:44.590 }, 00:16:44.590 "multi_ctrlr": true, 00:16:44.590 "ana_reporting": false 00:16:44.590 }, 00:16:44.590 "vs": { 00:16:44.591 "nvme_version": "1.3" 00:16:44.591 }, 00:16:44.591 "ns_data": { 00:16:44.591 "id": 1, 00:16:44.591 "can_share": true 00:16:44.591 } 00:16:44.591 } 00:16:44.591 ], 00:16:44.591 "mp_policy": "active_passive" 00:16:44.591 } 00:16:44.591 } 00:16:44.591 ] 00:16:44.591 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3557844 00:16:44.591 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:44.591 13:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:44.849 Running I/O for 10 seconds... 00:16:45.783 Latency(us) 00:16:45.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.783 Nvme0n1 : 1.00 15381.00 60.08 0.00 0.00 0.00 0.00 0.00 00:16:45.783 =================================================================================================================== 00:16:45.783 Total : 15381.00 60.08 0.00 0.00 0.00 0.00 0.00 00:16:45.783 00:16:46.722 13:24:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:46.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.722 Nvme0n1 : 2.00 15661.00 61.18 0.00 0.00 0.00 0.00 0.00 00:16:46.722 =================================================================================================================== 00:16:46.722 Total : 15661.00 61.18 0.00 0.00 0.00 0.00 0.00 00:16:46.722 00:16:46.979 true 00:16:46.979 13:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:46.979 13:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:47.236 13:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:47.236 13:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:47.236 13:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3557844 00:16:47.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.802 Nvme0n1 : 3.00 15674.67 61.23 0.00 0.00 0.00 0.00 0.00 00:16:47.802 =================================================================================================================== 00:16:47.802 Total : 15674.67 61.23 0.00 0.00 0.00 0.00 0.00 00:16:47.802 00:16:48.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.735 Nvme0n1 : 4.00 15770.50 61.60 0.00 0.00 0.00 0.00 0.00 00:16:48.735 =================================================================================================================== 00:16:48.735 Total : 15770.50 61.60 0.00 0.00 0.00 0.00 0.00 00:16:48.735 00:16:49.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.677 Nvme0n1 : 5.00 
15820.80 61.80 0.00 0.00 0.00 0.00 0.00 00:16:49.677 =================================================================================================================== 00:16:49.677 Total : 15820.80 61.80 0.00 0.00 0.00 0.00 0.00 00:16:49.677 00:16:51.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.051 Nvme0n1 : 6.00 15902.50 62.12 0.00 0.00 0.00 0.00 0.00 00:16:51.051 =================================================================================================================== 00:16:51.051 Total : 15902.50 62.12 0.00 0.00 0.00 0.00 0.00 00:16:51.051 00:16:51.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.985 Nvme0n1 : 7.00 15944.86 62.28 0.00 0.00 0.00 0.00 0.00 00:16:51.985 =================================================================================================================== 00:16:51.985 Total : 15944.86 62.28 0.00 0.00 0.00 0.00 0.00 00:16:51.985 00:16:52.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.920 Nvme0n1 : 8.00 15977.00 62.41 0.00 0.00 0.00 0.00 0.00 00:16:52.920 =================================================================================================================== 00:16:52.920 Total : 15977.00 62.41 0.00 0.00 0.00 0.00 0.00 00:16:52.920 00:16:53.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.856 Nvme0n1 : 9.00 15982.78 62.43 0.00 0.00 0.00 0.00 0.00 00:16:53.856 =================================================================================================================== 00:16:53.856 Total : 15982.78 62.43 0.00 0.00 0.00 0.00 0.00 00:16:53.856 00:16:54.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.790 Nvme0n1 : 10.00 15999.80 62.50 0.00 0.00 0.00 0.00 0.00 00:16:54.790 =================================================================================================================== 00:16:54.790 Total : 15999.80 62.50 0.00 0.00 0.00 0.00 0.00 00:16:54.790 00:16:54.790 00:16:54.790 Latency(us) 00:16:54.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.790 Nvme0n1 : 10.01 16004.20 62.52 0.00 0.00 7993.22 2305.90 15728.64 00:16:54.790 =================================================================================================================== 00:16:54.790 Total : 16004.20 62.52 0.00 0.00 7993.22 2305.90 15728.64 00:16:54.790 0 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3557712 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3557712 ']' 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3557712 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3557712 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:54.790 13:24:52 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3557712' 00:16:54.790 killing process with pid 3557712 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3557712 00:16:54.790 Received shutdown signal, test time was about 10.000000 seconds 00:16:54.790 00:16:54.790 Latency(us) 00:16:54.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.790 =================================================================================================================== 00:16:54.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.790 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3557712 00:16:55.047 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:55.306 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:55.564 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:55.564 13:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3555231 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3555231 00:16:55.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3555231 Killed "${NVMF_APP[@]}" "$@" 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3559051 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3559051 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3559051 ']' 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.821 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:55.821 [2024-07-12 13:24:53.185950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:16:55.821 [2024-07-12 13:24:53.186048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.821 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.821 [2024-07-12 13:24:53.226274] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:55.821 [2024-07-12 13:24:53.252631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.078 [2024-07-12 13:24:53.339726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.078 [2024-07-12 13:24:53.339781] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.078 [2024-07-12 13:24:53.339809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.079 [2024-07-12 13:24:53.339821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.079 [2024-07-12 13:24:53.339831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
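This is the step that makes this variant "dirty": the nvmf target holding the lvstore is killed with SIGKILL rather than shut down cleanly, a fresh nvmf_tgt is started in the same network namespace, and re-creating the AIO bdev forces the blobstore to be recovered when the lvstore is loaded again (the bs_recover NOTICE lines that follow). Roughly, per the trace (pids, paths and flags are from this run; the structure of the sketch is mine):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

kill -9 3555231; wait 3555231 || true             # old target dies with the lvstore still open
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# ...wait for /var/tmp/spdk.sock to come up, then:
$rpc bdev_aio_create "$aio" aio_bdev 4096         # loading the lvstore now triggers blobstore recovery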
00:16:56.079 [2024-07-12 13:24:53.339866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.079 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.079 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:56.079 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.079 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.079 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:56.079 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.079 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:56.336 [2024-07-12 13:24:53.739559] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:56.336 [2024-07-12 13:24:53.739679] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:56.336 [2024-07-12 13:24:53.739726] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:56.336 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:56.336 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5fab99dd-f41a-4bfa-b3b3-ff09497be01c 00:16:56.336 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5fab99dd-f41a-4bfa-b3b3-ff09497be01c 00:16:56.336 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:56.336 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:56.336 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:56.336 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:56.336 13:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:56.594 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5fab99dd-f41a-4bfa-b3b3-ff09497be01c -t 2000 00:16:57.159 [ 00:16:57.159 { 00:16:57.159 "name": "5fab99dd-f41a-4bfa-b3b3-ff09497be01c", 00:16:57.159 "aliases": [ 00:16:57.159 "lvs/lvol" 00:16:57.159 ], 00:16:57.159 "product_name": "Logical Volume", 00:16:57.159 "block_size": 4096, 00:16:57.159 "num_blocks": 38912, 00:16:57.159 "uuid": "5fab99dd-f41a-4bfa-b3b3-ff09497be01c", 00:16:57.159 "assigned_rate_limits": { 00:16:57.159 "rw_ios_per_sec": 0, 00:16:57.159 "rw_mbytes_per_sec": 0, 00:16:57.159 "r_mbytes_per_sec": 0, 00:16:57.159 "w_mbytes_per_sec": 0 00:16:57.159 }, 00:16:57.159 "claimed": false, 00:16:57.159 "zoned": false, 00:16:57.159 "supported_io_types": { 00:16:57.159 "read": true, 00:16:57.159 "write": true, 00:16:57.159 "unmap": true, 00:16:57.159 "flush": false, 00:16:57.159 "reset": true, 00:16:57.159 "nvme_admin": false, 00:16:57.159 "nvme_io": false, 00:16:57.159 "nvme_io_md": 
false, 00:16:57.159 "write_zeroes": true, 00:16:57.159 "zcopy": false, 00:16:57.159 "get_zone_info": false, 00:16:57.159 "zone_management": false, 00:16:57.159 "zone_append": false, 00:16:57.159 "compare": false, 00:16:57.159 "compare_and_write": false, 00:16:57.159 "abort": false, 00:16:57.159 "seek_hole": true, 00:16:57.159 "seek_data": true, 00:16:57.159 "copy": false, 00:16:57.159 "nvme_iov_md": false 00:16:57.159 }, 00:16:57.159 "driver_specific": { 00:16:57.159 "lvol": { 00:16:57.159 "lvol_store_uuid": "6d098c08-0594-4c38-a6f0-88609a613b74", 00:16:57.159 "base_bdev": "aio_bdev", 00:16:57.159 "thin_provision": false, 00:16:57.159 "num_allocated_clusters": 38, 00:16:57.159 "snapshot": false, 00:16:57.159 "clone": false, 00:16:57.159 "esnap_clone": false 00:16:57.159 } 00:16:57.159 } 00:16:57.159 } 00:16:57.159 ] 00:16:57.159 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:57.159 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:57.159 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:57.159 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:57.159 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:57.159 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:57.418 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:57.418 13:24:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:57.676 [2024-07-12 13:24:55.069066] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:57.676 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:57.934 request: 00:16:57.934 { 00:16:57.934 "uuid": "6d098c08-0594-4c38-a6f0-88609a613b74", 00:16:57.934 "method": "bdev_lvol_get_lvstores", 00:16:57.934 "req_id": 1 00:16:57.934 } 00:16:57.934 Got JSON-RPC error response 00:16:57.934 response: 00:16:57.934 { 00:16:57.934 "code": -19, 00:16:57.934 "message": "No such device" 00:16:57.934 } 00:16:57.934 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:57.934 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:57.934 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:57.934 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:57.934 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:58.192 aio_bdev 00:16:58.192 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5fab99dd-f41a-4bfa-b3b3-ff09497be01c 00:16:58.192 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5fab99dd-f41a-4bfa-b3b3-ff09497be01c 00:16:58.192 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:58.192 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:58.192 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:58.192 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:58.192 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:58.450 13:24:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5fab99dd-f41a-4bfa-b3b3-ff09497be01c -t 2000 00:16:58.708 [ 00:16:58.708 { 00:16:58.708 "name": "5fab99dd-f41a-4bfa-b3b3-ff09497be01c", 00:16:58.708 "aliases": [ 00:16:58.708 "lvs/lvol" 00:16:58.708 ], 00:16:58.708 "product_name": "Logical Volume", 00:16:58.708 "block_size": 4096, 00:16:58.708 "num_blocks": 38912, 00:16:58.708 "uuid": "5fab99dd-f41a-4bfa-b3b3-ff09497be01c", 00:16:58.708 "assigned_rate_limits": { 00:16:58.708 "rw_ios_per_sec": 0, 00:16:58.708 "rw_mbytes_per_sec": 0, 00:16:58.708 "r_mbytes_per_sec": 0, 00:16:58.708 "w_mbytes_per_sec": 0 00:16:58.708 }, 00:16:58.708 "claimed": false, 00:16:58.708 "zoned": false, 00:16:58.708 "supported_io_types": { 
00:16:58.708 "read": true, 00:16:58.708 "write": true, 00:16:58.708 "unmap": true, 00:16:58.708 "flush": false, 00:16:58.708 "reset": true, 00:16:58.708 "nvme_admin": false, 00:16:58.708 "nvme_io": false, 00:16:58.708 "nvme_io_md": false, 00:16:58.708 "write_zeroes": true, 00:16:58.708 "zcopy": false, 00:16:58.708 "get_zone_info": false, 00:16:58.708 "zone_management": false, 00:16:58.708 "zone_append": false, 00:16:58.708 "compare": false, 00:16:58.708 "compare_and_write": false, 00:16:58.708 "abort": false, 00:16:58.708 "seek_hole": true, 00:16:58.708 "seek_data": true, 00:16:58.708 "copy": false, 00:16:58.708 "nvme_iov_md": false 00:16:58.708 }, 00:16:58.708 "driver_specific": { 00:16:58.708 "lvol": { 00:16:58.708 "lvol_store_uuid": "6d098c08-0594-4c38-a6f0-88609a613b74", 00:16:58.708 "base_bdev": "aio_bdev", 00:16:58.708 "thin_provision": false, 00:16:58.708 "num_allocated_clusters": 38, 00:16:58.708 "snapshot": false, 00:16:58.708 "clone": false, 00:16:58.708 "esnap_clone": false 00:16:58.708 } 00:16:58.708 } 00:16:58.708 } 00:16:58.708 ] 00:16:58.708 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:58.708 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:58.708 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:58.966 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:58.966 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:58.966 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:59.224 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:59.224 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5fab99dd-f41a-4bfa-b3b3-ff09497be01c 00:16:59.482 13:24:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d098c08-0594-4c38-a6f0-88609a613b74 00:16:59.740 13:24:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:59.999 00:16:59.999 real 0m18.951s 00:16:59.999 user 0m47.906s 00:16:59.999 sys 0m4.530s 00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:59.999 ************************************ 00:16:59.999 END TEST lvs_grow_dirty 00:16:59.999 ************************************ 00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:59.999 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:00.257 nvmf_trace.0 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.257 rmmod nvme_tcp 00:17:00.257 rmmod nvme_fabrics 00:17:00.257 rmmod nvme_keyring 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3559051 ']' 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3559051 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3559051 ']' 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3559051 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3559051 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3559051' 00:17:00.257 killing process with pid 3559051 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3559051 00:17:00.257 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3559051 00:17:00.516 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.516 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:00.516 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:00.516 
13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.516 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.516 13:24:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.516 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.516 13:24:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.417 13:24:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:02.417 00:17:02.417 real 0m41.618s 00:17:02.417 user 1m10.412s 00:17:02.417 sys 0m8.393s 00:17:02.417 13:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.417 13:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:02.417 ************************************ 00:17:02.417 END TEST nvmf_lvs_grow 00:17:02.417 ************************************ 00:17:02.417 13:24:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:02.417 13:24:59 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:02.417 13:24:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:02.417 13:24:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.417 13:24:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.674 ************************************ 00:17:02.674 START TEST nvmf_bdev_io_wait 00:17:02.674 ************************************ 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:02.674 * Looking for test storage... 
00:17:02.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.674 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.675 13:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:05.204 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:05.204 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.204 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:05.205 Found net devices under 0000:09:00.0: cvl_0_0 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:05.205 Found net devices under 0000:09:00.1: cvl_0_1 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:05.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:17:05.205 00:17:05.205 --- 10.0.0.2 ping statistics --- 00:17:05.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.205 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:17:05.205 00:17:05.205 --- 10.0.0.1 ping statistics --- 00:17:05.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.205 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3561574 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3561574 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3561574 ']' 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.205 [2024-07-12 13:25:02.279653] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
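The nvmf_tcp_init sequence traced above amounts to a small piece of namespace plumbing: the first ice port (cvl_0_0) is moved into a private network namespace and plays the target, the second port (cvl_0_1) stays in the default namespace as the initiator, and the two pings verify reachability in both directions before any NVMe/TCP traffic flows. A condensed sketch of the equivalent commands, using only the interface names and addresses that appear in the trace (not the verbatim common.sh code):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # NVMF_INITIATOR_IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # NVMF_FIRST_TARGET_IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability check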
00:17:05.205 [2024-07-12 13:25:02.279740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.205 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.205 [2024-07-12 13:25:02.320760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:05.205 [2024-07-12 13:25:02.348326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.205 [2024-07-12 13:25:02.432936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.205 [2024-07-12 13:25:02.432987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.205 [2024-07-12 13:25:02.433015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.205 [2024-07-12 13:25:02.433026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.205 [2024-07-12 13:25:02.433036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.205 [2024-07-12 13:25:02.433130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.205 [2024-07-12 13:25:02.433280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.205 [2024-07-12 13:25:02.433340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.205 [2024-07-12 13:25:02.433344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.205 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 [2024-07-12 
13:25:02.594699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 Malloc0 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 [2024-07-12 13:25:02.659969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3561713 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3561714 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3561717 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.206 { 00:17:05.206 "params": { 00:17:05.206 "name": 
"Nvme$subsystem", 00:17:05.206 "trtype": "$TEST_TRANSPORT", 00:17:05.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.206 "adrfam": "ipv4", 00:17:05.206 "trsvcid": "$NVMF_PORT", 00:17:05.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.206 "hdgst": ${hdgst:-false}, 00:17:05.206 "ddgst": ${ddgst:-false} 00:17:05.206 }, 00:17:05.206 "method": "bdev_nvme_attach_controller" 00:17:05.206 } 00:17:05.206 EOF 00:17:05.206 )") 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.206 { 00:17:05.206 "params": { 00:17:05.206 "name": "Nvme$subsystem", 00:17:05.206 "trtype": "$TEST_TRANSPORT", 00:17:05.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.206 "adrfam": "ipv4", 00:17:05.206 "trsvcid": "$NVMF_PORT", 00:17:05.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.206 "hdgst": ${hdgst:-false}, 00:17:05.206 "ddgst": ${ddgst:-false} 00:17:05.206 }, 00:17:05.206 "method": "bdev_nvme_attach_controller" 00:17:05.206 } 00:17:05.206 EOF 00:17:05.206 )") 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3561719 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.206 { 00:17:05.206 "params": { 00:17:05.206 "name": "Nvme$subsystem", 00:17:05.206 "trtype": "$TEST_TRANSPORT", 00:17:05.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.206 "adrfam": "ipv4", 00:17:05.206 "trsvcid": "$NVMF_PORT", 00:17:05.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.206 "hdgst": ${hdgst:-false}, 00:17:05.206 "ddgst": ${ddgst:-false} 00:17:05.206 }, 00:17:05.206 "method": "bdev_nvme_attach_controller" 00:17:05.206 } 00:17:05.206 EOF 00:17:05.206 )") 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:05.206 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.206 { 00:17:05.206 "params": { 00:17:05.206 "name": "Nvme$subsystem", 00:17:05.206 "trtype": "$TEST_TRANSPORT", 00:17:05.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.206 "adrfam": "ipv4", 00:17:05.207 "trsvcid": "$NVMF_PORT", 00:17:05.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.207 "hdgst": ${hdgst:-false}, 00:17:05.207 "ddgst": ${ddgst:-false} 00:17:05.207 }, 00:17:05.207 "method": "bdev_nvme_attach_controller" 00:17:05.207 } 00:17:05.207 EOF 00:17:05.207 )") 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3561713 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.207 "params": { 00:17:05.207 "name": "Nvme1", 00:17:05.207 "trtype": "tcp", 00:17:05.207 "traddr": "10.0.0.2", 00:17:05.207 "adrfam": "ipv4", 00:17:05.207 "trsvcid": "4420", 00:17:05.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.207 "hdgst": false, 00:17:05.207 "ddgst": false 00:17:05.207 }, 00:17:05.207 "method": "bdev_nvme_attach_controller" 00:17:05.207 }' 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.207 "params": { 00:17:05.207 "name": "Nvme1", 00:17:05.207 "trtype": "tcp", 00:17:05.207 "traddr": "10.0.0.2", 00:17:05.207 "adrfam": "ipv4", 00:17:05.207 "trsvcid": "4420", 00:17:05.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.207 "hdgst": false, 00:17:05.207 "ddgst": false 00:17:05.207 }, 00:17:05.207 "method": "bdev_nvme_attach_controller" 00:17:05.207 }' 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.207 "params": { 00:17:05.207 "name": "Nvme1", 00:17:05.207 "trtype": "tcp", 00:17:05.207 "traddr": "10.0.0.2", 00:17:05.207 "adrfam": "ipv4", 00:17:05.207 "trsvcid": "4420", 00:17:05.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.207 "hdgst": false, 00:17:05.207 "ddgst": false 00:17:05.207 }, 00:17:05.207 "method": "bdev_nvme_attach_controller" 00:17:05.207 }' 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:05.207 13:25:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.207 "params": { 00:17:05.207 "name": "Nvme1", 00:17:05.207 "trtype": "tcp", 00:17:05.207 "traddr": "10.0.0.2", 00:17:05.207 "adrfam": "ipv4", 00:17:05.207 "trsvcid": "4420", 
00:17:05.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.207 "hdgst": false, 00:17:05.207 "ddgst": false 00:17:05.207 }, 00:17:05.207 "method": "bdev_nvme_attach_controller" 00:17:05.207 }' 00:17:05.466 [2024-07-12 13:25:02.708689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:17:05.466 [2024-07-12 13:25:02.708684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:17:05.466 [2024-07-12 13:25:02.708685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:17:05.466 [2024-07-12 13:25:02.708685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:17:05.466 [2024-07-12 13:25:02.708772] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:05.466 [2024-07-12 13:25:02.708788] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 13:25:02.708787] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 13:25:02.708788] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:05.466 --proc-type=auto ] 00:17:05.466 --proc-type=auto ] 00:17:05.466 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.466 [2024-07-12 13:25:02.858560] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:05.466 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.466 [2024-07-12 13:25:02.886675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.742 [2024-07-12 13:25:02.962355] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:05.742 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.742 [2024-07-12 13:25:02.965689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:05.742 [2024-07-12 13:25:02.992019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.742 [2024-07-12 13:25:03.063212] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:05.742 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.742 [2024-07-12 13:25:03.070067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:05.742 [2024-07-12 13:25:03.093672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.742 [2024-07-12 13:25:03.139783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:05.742 [2024-07-12 13:25:03.169426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.742 [2024-07-12 13:25:03.171339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:05.999 [2024-07-12 13:25:03.238583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:05.999 Running I/O for 1 seconds... 00:17:05.999 Running I/O for 1 seconds... 00:17:05.999 Running I/O for 1 seconds... 00:17:05.999 Running I/O for 1 seconds... 00:17:06.941 00:17:06.941 Latency(us) 00:17:06.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.941 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:06.941 Nvme1n1 : 1.00 198796.41 776.55 0.00 0.00 641.50 273.07 885.95 00:17:06.941 =================================================================================================================== 00:17:06.941 Total : 198796.41 776.55 0.00 0.00 641.50 273.07 885.95 00:17:06.941 00:17:06.941 Latency(us) 00:17:06.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.941 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:06.941 Nvme1n1 : 1.03 5363.64 20.95 0.00 0.00 23536.35 8155.59 35340.89 00:17:06.941 =================================================================================================================== 00:17:06.941 Total : 5363.64 20.95 0.00 0.00 23536.35 8155.59 35340.89 00:17:07.200 00:17:07.200 Latency(us) 00:17:07.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.200 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:07.200 Nvme1n1 : 1.01 4971.52 19.42 0.00 0.00 25601.72 10922.67 45632.47 00:17:07.200 =================================================================================================================== 00:17:07.200 Total : 4971.52 19.42 0.00 0.00 25601.72 10922.67 45632.47 00:17:07.200 00:17:07.200 Latency(us) 00:17:07.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.200 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:07.200 Nvme1n1 : 1.01 7781.20 30.40 0.00 0.00 16381.45 7427.41 29515.47 00:17:07.200 =================================================================================================================== 00:17:07.200 Total : 7781.20 30.40 0.00 0.00 16381.45 7427.41 29515.47 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3561714 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3561717 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3561719 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:07.458 13:25:04 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.458 rmmod nvme_tcp 00:17:07.458 rmmod nvme_fabrics 00:17:07.458 rmmod nvme_keyring 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3561574 ']' 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3561574 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3561574 ']' 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3561574 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3561574 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3561574' 00:17:07.458 killing process with pid 3561574 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3561574 00:17:07.458 13:25:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3561574 00:17:07.716 13:25:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:07.716 13:25:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:07.716 13:25:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:07.716 13:25:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.716 13:25:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.716 13:25:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.716 13:25:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.716 13:25:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.621 13:25:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:09.621 00:17:09.621 real 0m7.177s 00:17:09.621 user 0m15.812s 00:17:09.621 sys 0m3.597s 00:17:09.621 13:25:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.621 13:25:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:09.621 ************************************ 00:17:09.621 END TEST nvmf_bdev_io_wait 00:17:09.621 ************************************ 00:17:09.881 13:25:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:09.881 13:25:07 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test 
nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:09.881 13:25:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:09.881 13:25:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.881 13:25:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.881 ************************************ 00:17:09.881 START TEST nvmf_queue_depth 00:17:09.881 ************************************ 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:09.881 * Looking for test storage... 00:17:09.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.881 13:25:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.411 13:25:09 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:12.411 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:12.411 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:12.411 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:12.412 Found net devices under 0000:09:00.0: cvl_0_0 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:12.412 Found net devices under 0000:09:00.1: cvl_0_1 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:12.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:17:12.412 00:17:12.412 --- 10.0.0.2 ping statistics --- 00:17:12.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.412 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:12.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:17:12.412 00:17:12.412 --- 10.0.0.1 ping statistics --- 00:17:12.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.412 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3563942 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3563942 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3563942 ']' 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
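For the queue-depth test the target is started fresh inside the same namespace, this time with core mask 0x2, i.e. a single reactor pinned to core 1 (which is what the reactor.c notice below reports), and the script then blocks until the app answers on its RPC socket. A minimal illustration of that wait, assuming the stock rpc.py client (the real waitforlisten helper in autotest_common.sh is more elaborate):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1    # poll until the target is listening on /var/tmp/spdk.sock
  done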
00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.412 [2024-07-12 13:25:09.545985] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:17:12.412 [2024-07-12 13:25:09.546069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.412 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.412 [2024-07-12 13:25:09.582579] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:12.412 [2024-07-12 13:25:09.608470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.412 [2024-07-12 13:25:09.691828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.412 [2024-07-12 13:25:09.691889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.412 [2024-07-12 13:25:09.691903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.412 [2024-07-12 13:25:09.691915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.412 [2024-07-12 13:25:09.691925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.412 [2024-07-12 13:25:09.691965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.412 [2024-07-12 13:25:09.835902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.412 Malloc0 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:12.412 13:25:09 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.671 [2024-07-12 13:25:09.901763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3563961 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3563961 /var/tmp/bdevperf.sock 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3563961 ']' 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.671 13:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.671 [2024-07-12 13:25:09.949517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:17:12.671 [2024-07-12 13:25:09.949599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3563961 ] 00:17:12.671 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.671 [2024-07-12 13:25:09.980168] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:12.671 [2024-07-12 13:25:10.009801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.671 [2024-07-12 13:25:10.096293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.928 13:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.928 13:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:12.928 13:25:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:12.928 13:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.928 13:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.928 NVMe0n1 00:17:12.928 13:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.929 13:25:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.929 Running I/O for 10 seconds... 00:17:25.127 00:17:25.127 Latency(us) 00:17:25.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.127 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:25.127 Verification LBA range: start 0x0 length 0x4000 00:17:25.127 NVMe0n1 : 10.09 8787.36 34.33 0.00 0.00 115967.86 21651.15 70293.43 00:17:25.127 =================================================================================================================== 00:17:25.127 Total : 8787.36 34.33 0.00 0.00 115967.86 21651.15 70293.43 00:17:25.127 0 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3563961 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3563961 ']' 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3563961 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3563961 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3563961' 00:17:25.127 killing process with pid 3563961 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3563961 00:17:25.127 Received shutdown signal, test time was about 10.000000 seconds 00:17:25.127 00:17:25.127 Latency(us) 00:17:25.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.127 =================================================================================================================== 00:17:25.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3563961 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.127 rmmod nvme_tcp 00:17:25.127 rmmod nvme_fabrics 00:17:25.127 rmmod nvme_keyring 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3563942 ']' 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3563942 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3563942 ']' 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3563942 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3563942 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3563942' 00:17:25.127 killing process with pid 3563942 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3563942 00:17:25.127 13:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3563942 00:17:25.127 13:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.127 13:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.127 13:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.127 13:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.127 13:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.127 13:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.127 13:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.127 13:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.693 13:25:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.693 00:17:25.693 real 0m16.002s 00:17:25.693 user 0m22.262s 00:17:25.693 sys 0m3.130s 00:17:25.693 13:25:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.693 13:25:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:25.693 ************************************ 00:17:25.693 END TEST nvmf_queue_depth 00:17:25.693 ************************************ 
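The nvmf_queue_depth run that just finished drives the target through the harness's rpc_cmd wrapper. Below is a minimal standalone sketch of the same flow, assuming plain rpc.py against the default /var/tmp/spdk.sock socket and an SPDK_DIR variable introduced here for brevity; waits such as waitforlisten are omitted. All subsystem names, sizes, addresses and bdevperf arguments are taken from the log.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    # target side: nvmf_tgt runs inside the target namespace on core mask 0x2
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB namespace, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf idles (-z) until perform_tests is issued over its own RPC socket
    "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

With these parameters the run above settled at roughly 8.8k IOPS (about 34 MiB/s) against the malloc namespace at a queue depth of 1024, as reported in the latency table before the teardown.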
00:17:25.950 13:25:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.950 13:25:23 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:25.950 13:25:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.950 13:25:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.950 13:25:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.950 ************************************ 00:17:25.950 START TEST nvmf_target_multipath 00:17:25.950 ************************************ 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:25.950 * Looking for test storage... 00:17:25.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.950 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.951 13:25:23 
nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.951 13:25:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:27.851 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:27.851 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.851 13:25:25 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:27.851 Found net devices under 0000:09:00.0: cvl_0_0 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.851 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:27.851 Found net devices under 0000:09:00.1: cvl_0_1 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:27.852 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 
-- # ip -4 addr flush cvl_0_0 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:17:28.110 00:17:28.110 --- 10.0.0.2 ping statistics --- 00:17:28.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.110 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:17:28.110 00:17:28.110 --- 10.0.0.1 ping statistics --- 00:17:28.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.110 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:28.110 only one NIC for nvmf test 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.110 rmmod nvme_tcp 00:17:28.110 rmmod nvme_fabrics 00:17:28.110 rmmod nvme_keyring 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.110 13:25:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.676 13:25:27 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:30.676 00:17:30.676 real 0m4.417s 00:17:30.676 user 0m0.823s 00:17:30.676 sys 0m1.586s 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.676 13:25:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:30.676 ************************************ 00:17:30.676 END TEST nvmf_target_multipath 00:17:30.676 ************************************ 00:17:30.676 13:25:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:30.676 13:25:27 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:30.676 13:25:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:30.676 13:25:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.676 13:25:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:30.676 ************************************ 00:17:30.676 START TEST nvmf_zcopy 00:17:30.676 ************************************ 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:30.676 * Looking for test storage... 00:17:30.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.676 13:25:27 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.676 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.677 
13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.677 13:25:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:32.585 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:32.585 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:32.585 Found net devices under 0000:09:00.0: cvl_0_0 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.585 
13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:32.585 Found net devices under 0000:09:00.1: cvl_0_1 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:32.585 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:32.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:32.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:17:32.586 00:17:32.586 --- 10.0.0.2 ping statistics --- 00:17:32.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.586 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:17:32.586 00:17:32.586 --- 10.0.0.1 ping statistics --- 00:17:32.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.586 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3569125 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3569125 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3569125 ']' 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.586 13:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.586 [2024-07-12 13:25:29.863950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:17:32.586 [2024-07-12 13:25:29.864020] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.586 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.586 [2024-07-12 13:25:29.898690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:32.586 [2024-07-12 13:25:29.926469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.586 [2024-07-12 13:25:30.016472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.586 [2024-07-12 13:25:30.016531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.586 [2024-07-12 13:25:30.016563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.586 [2024-07-12 13:25:30.016576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.586 [2024-07-12 13:25:30.016587] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.586 [2024-07-12 13:25:30.016626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.844 [2024-07-12 13:25:30.160368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.844 [2024-07-12 13:25:30.176528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.844 13:25:30 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.844 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.845 malloc0 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:32.845 { 00:17:32.845 "params": { 00:17:32.845 "name": "Nvme$subsystem", 00:17:32.845 "trtype": "$TEST_TRANSPORT", 00:17:32.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:32.845 "adrfam": "ipv4", 00:17:32.845 "trsvcid": "$NVMF_PORT", 00:17:32.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:32.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:32.845 "hdgst": ${hdgst:-false}, 00:17:32.845 "ddgst": ${ddgst:-false} 00:17:32.845 }, 00:17:32.845 "method": "bdev_nvme_attach_controller" 00:17:32.845 } 00:17:32.845 EOF 00:17:32.845 )") 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:32.845 13:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:32.845 "params": { 00:17:32.845 "name": "Nvme1", 00:17:32.845 "trtype": "tcp", 00:17:32.845 "traddr": "10.0.0.2", 00:17:32.845 "adrfam": "ipv4", 00:17:32.845 "trsvcid": "4420", 00:17:32.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.845 "hdgst": false, 00:17:32.845 "ddgst": false 00:17:32.845 }, 00:17:32.845 "method": "bdev_nvme_attach_controller" 00:17:32.845 }' 00:17:32.845 [2024-07-12 13:25:30.260686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
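The rpc_cmd calls in the trace above forward their arguments to SPDK's scripts/rpc.py against the running nvmf_tgt. A minimal sketch of the same target-side configuration, assuming rpc.py is invoked from the SPDK tree and the default /var/tmp/spdk.sock RPC socket (the socket path and wrapper are assumptions; the command arguments are copied from the trace):

  rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # assumed socket path and script location

  # TCP transport with zero-copy enabled, using the exact options from the trace.
  rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem allowing any host, serial SPDK00000000000001, at most 10 namespaces.
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1.
  rpc bdev_malloc_create 32 4096 -b malloc0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1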
00:17:32.845 [2024-07-12 13:25:30.260769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569152 ] 00:17:32.845 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.845 [2024-07-12 13:25:30.298518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:33.103 [2024-07-12 13:25:30.326486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.103 [2024-07-12 13:25:30.413537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.361 Running I/O for 10 seconds... 00:17:45.549 00:17:45.549 Latency(us) 00:17:45.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.549 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:45.549 Verification LBA range: start 0x0 length 0x1000 00:17:45.549 Nvme1n1 : 10.02 5930.78 46.33 0.00 0.00 21524.55 2318.03 32234.00 00:17:45.549 =================================================================================================================== 00:17:45.549 Total : 5930.78 46.33 0.00 0.00 21524.55 2318.03 32234.00 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3570349 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:45.549 { 00:17:45.549 "params": { 00:17:45.549 "name": "Nvme$subsystem", 00:17:45.549 "trtype": "$TEST_TRANSPORT", 00:17:45.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:45.549 "adrfam": "ipv4", 00:17:45.549 "trsvcid": "$NVMF_PORT", 00:17:45.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:45.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:45.549 "hdgst": ${hdgst:-false}, 00:17:45.549 "ddgst": ${ddgst:-false} 00:17:45.549 }, 00:17:45.549 "method": "bdev_nvme_attach_controller" 00:17:45.549 } 00:17:45.549 EOF 00:17:45.549 )") 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:45.549 [2024-07-12 13:25:41.037123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.037170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
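Both bdevperf runs above receive their NVMe-oF attach parameters as a JSON config on an anonymous file descriptor (/dev/fd/62 and /dev/fd/63, produced by gen_nvmf_target_json through process substitution). A minimal standalone sketch of the first run, writing the config to a temporary file instead; the bdev_nvme_attach_controller parameters are copied from the printf in the trace, while the surrounding "subsystems"/"bdev" wrapper is assumed to follow the usual SPDK JSON config layout, and the bdevperf path assumes the SPDK build tree as working directory:

  # Config describing the single NVMe/TCP controller to attach before the benchmark starts.
  cat > /tmp/zcopy_bdev.json <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
JSON

  # 10-second verify workload, queue depth 128, 8 KiB I/O, matching the first run above.
  build/examples/bdevperf --json /tmp/zcopy_bdev.json -t 10 -q 128 -w verify -o 8192

The second run below uses the same config with "-t 5 -q 128 -w randrw -M 50 -o 8192" instead.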
00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:45.549 13:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:45.549 "params": { 00:17:45.549 "name": "Nvme1", 00:17:45.549 "trtype": "tcp", 00:17:45.549 "traddr": "10.0.0.2", 00:17:45.549 "adrfam": "ipv4", 00:17:45.549 "trsvcid": "4420", 00:17:45.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.549 "hdgst": false, 00:17:45.549 "ddgst": false 00:17:45.549 }, 00:17:45.549 "method": "bdev_nvme_attach_controller" 00:17:45.549 }' 00:17:45.549 [2024-07-12 13:25:41.045081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.045105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.053118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.053141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.061119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.061140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.069142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.069162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.077161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.077181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.077368] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:17:45.549 [2024-07-12 13:25:41.077444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3570349 ] 00:17:45.549 [2024-07-12 13:25:41.085183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.085204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.093205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.093225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.101227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.101247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.549 [2024-07-12 13:25:41.109249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.109268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.112992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:45.549 [2024-07-12 13:25:41.117273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.117322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.125294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.125338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.133341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.133361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.141359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.549 [2024-07-12 13:25:41.141381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.549 [2024-07-12 13:25:41.141652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.549 [2024-07-12 13:25:41.149435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.149468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.157445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.157479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.165425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.165446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.173448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.173469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.181470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.181491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.189495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.189518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.197554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.197593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.205545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.205570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.213557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.213579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.221576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.221611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.229598] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.229619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.232919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.550 [2024-07-12 13:25:41.237631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.237651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.245660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.245682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.253718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.253753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.261726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.261761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.269744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.269782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.277770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.277806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.285792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.285831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.293807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.293848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.301793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.301815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.309846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.309879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.317868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.317902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.325888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.325923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.333877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.333897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.341899] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.341919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.349941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.349964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.357962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.357984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.365984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.366005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.374022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.374046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.382030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.382052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.390055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.390076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.398074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.398095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.406095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.406115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.414119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.414143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 Running I/O for 5 seconds... 
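Every error pair from here to the end of the run records one rejected nvmf_subsystem_add_ns call: spdk_nvmf_subsystem_add_ns_ext refuses NSID 1 because malloc0 already occupies it, and the RPC's paused-subsystem callback (nvmf_rpc_ns_paused) then reports the failure, so the namespace set is left unchanged while I/O continues. A purely hypothetical loop that would generate output like this against the target configured above (not the test's actual script):

  rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # assumed socket path, as in the sketch above

  # Hypothetical: keep re-requesting NSID 1 while bdevperf I/O is in flight; each
  # attempt is rejected and logs the "already in use" / "Unable to add namespace" pair.
  for _ in $(seq 1 50); do
      rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done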
00:17:45.550 [2024-07-12 13:25:41.422140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.422160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.436737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.436766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.447470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.447513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.458030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.458057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.468457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.468484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.479028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.479054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.490150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.490177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.501125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.501167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.512052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.512078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.522696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.522723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.533399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.533427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.544284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.544312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.556942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.556968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.567352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 [2024-07-12 13:25:41.567378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.550 [2024-07-12 13:25:41.578438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.550 
[2024-07-12 13:25:41.578465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.589254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.589279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.599764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.599790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.610356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.610383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.623416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.623443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.632139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.632164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.645104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.645147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.656207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.656249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.669191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.669218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.679295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.679342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.690204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.690232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.700549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.700575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.711330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.711373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.724386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.724413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.734937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.734963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.745469] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.745497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.757926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.757953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.768684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.768711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.779080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.779107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.788910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.788937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.800104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.800130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.811383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.811410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.822659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.822701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.834098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.834124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.844694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.844718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.855586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.855611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.866363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.866390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.876955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.876982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.887440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.887466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.900883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.900909] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.911384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.911409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.922347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.922374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.932941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.932967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.944153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.944180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.957009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.957035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.966425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.966452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.977142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.977168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.987123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.987149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:41.998105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:41.998130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.008972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.008996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.018997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.019023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.030248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.030274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.040731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.040756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.051241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.051267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.061683] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.061709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.072664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.072697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.083998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.084024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.094411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.094438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.105014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.105042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.551 [2024-07-12 13:25:42.115426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.551 [2024-07-12 13:25:42.115453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.129082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.129108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.138709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.138734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.150093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.150119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.161199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.161226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.171266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.171294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.182498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.182525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.193092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.193116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.203975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.204017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.215203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.215228] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.226061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.226102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.237334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.237361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.248579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.248605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.258865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.258891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.270299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.270348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.280661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.280706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.292128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.292153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.303792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.303817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.314928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.314953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.325946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.325973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.338952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.338978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.350181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.350208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.361825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.361851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.372997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.373023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.383805] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.383830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.394510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.394536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.405470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.405497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.416764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.416789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.429551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.429578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.440748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.440774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.451957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.451984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.462886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.462911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.473496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.473524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.484430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.484471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.495126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.495157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.506240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.506264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.516673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.516697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.528114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.528140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.538245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.538269] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.549518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.549542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.560137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.560178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.571161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.571185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.582034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.582073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.592864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.592891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.603945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.603970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.614736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.614761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.627470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.627497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.638122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.638162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.552 [2024-07-12 13:25:42.649073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.552 [2024-07-12 13:25:42.649099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.661484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.661511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.672290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.672325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.682923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.682948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.694050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.694074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.704641] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.704689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.716042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.716066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.726606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.726631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.737714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.737755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.748091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.748116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.758733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.758758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.769733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.769760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.780588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.780629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.793839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.793865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.805326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.805352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.814880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.814904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.825459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.825486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.836501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.836529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.847515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.847542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.858827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.858851] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.869991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.870016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.880556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.880583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.891017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.891055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.901071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.901096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.912606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.912649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.923876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.923902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.934935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.934961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.945136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.945162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.956278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.956328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.967242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.967268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.978366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.978393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:42.988522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:42.988548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:43.000015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:43.000041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.553 [2024-07-12 13:25:43.012643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.553 [2024-07-12 13:25:43.012669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.023168] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.023194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.034767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.034793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.045618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.045643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.056756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.056780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.067890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.067917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.080545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.080572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.089498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.089540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.100891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.100916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.111523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.111548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.122502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.122527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.811 [2024-07-12 13:25:43.133354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.811 [2024-07-12 13:25:43.133380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.143160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.143186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.154595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.154623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.165205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.165230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.175499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.175525] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.189516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.189544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.199138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.199164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.209999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.210023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.220832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.220858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.232636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.232675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.242992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.243016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.254054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.254078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.266537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.266564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.812 [2024-07-12 13:25:43.276556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.812 [2024-07-12 13:25:43.276583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.289054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.289082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.299693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.299718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.310844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.310871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.323419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.323446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.334891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.334918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.345820] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.345845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.357763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.357790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.369998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.370025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.380167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.380194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.391870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.391897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.403033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.403058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.413924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.413963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.424813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.424838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.436150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.436178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.446954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.446981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.460530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.460557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.470494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.470521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.481358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.481386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.491494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.491521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.502602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.502643] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.513239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.513264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.524202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.524228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.069 [2024-07-12 13:25:43.537549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.069 [2024-07-12 13:25:43.537576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.548557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.548585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.559569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.559596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.570167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.570193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.580662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.580702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.591960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.591986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.602665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.602692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.615735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.615761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.625621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.625648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.636218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.636244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.649246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.649272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.660632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.660659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.671419] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.671460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.684012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.684038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.694842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.694881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.705800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.705826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.715713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.715737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.726891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.327 [2024-07-12 13:25:43.726918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.327 [2024-07-12 13:25:43.739653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.328 [2024-07-12 13:25:43.739694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.328 [2024-07-12 13:25:43.749537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.328 [2024-07-12 13:25:43.749584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.328 [2024-07-12 13:25:43.760927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.328 [2024-07-12 13:25:43.760953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.328 [2024-07-12 13:25:43.771667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.328 [2024-07-12 13:25:43.771691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.328 [2024-07-12 13:25:43.782936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.328 [2024-07-12 13:25:43.782962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.328 [2024-07-12 13:25:43.793683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.328 [2024-07-12 13:25:43.793709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.805058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.805100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.817669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.817695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.828205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.828232] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.839327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.839368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.850084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.850111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.860770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.860811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.873308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.873347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.884056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.884081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.895192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.895216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.908042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.908069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.917352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.917378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.929720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.929747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.941183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.941210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.953826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.953852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.963017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.963048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.975874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.975900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:43.987044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:43.987069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:44.000761] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:44.000788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:44.012679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:44.012707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:44.023564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:44.023592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:44.033841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:44.033865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.586 [2024-07-12 13:25:44.047560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.586 [2024-07-12 13:25:44.047589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.058714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.058742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.070197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.070225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.080925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.080953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.093757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.093785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.104524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.104550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.115881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.115908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.126843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.126882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.139840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.139866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.150632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.150657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.161603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.161638] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.171966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.171991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.183265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.183301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.193533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.193559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.205273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.205299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.216397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.216424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.226862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.226886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.238261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.238286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.248640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.248665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.259449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.259474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.271273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.271313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.282044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.282070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.292741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.292768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.305397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.305425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.844 [2024-07-12 13:25:44.316167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:46.844 [2024-07-12 13:25:44.316195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.327230] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.327258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.340197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.340224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.350830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.350856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.362019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.362060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.373388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.373416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.386485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.386512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.397248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.397281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.408481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.408508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.101 [2024-07-12 13:25:44.419404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.101 [2024-07-12 13:25:44.419432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.430429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.430456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.442491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.442519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.453589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.453645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.467616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.467645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.478796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.478821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.489777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.489803] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.500982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.501007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.511841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.511865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.524735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.524761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.534941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.534966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.546090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.546118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.557373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.557400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 [2024-07-12 13:25:44.568860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-12 13:25:44.568887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.579384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.579412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.590121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.590146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.600872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.600897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.613325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.613351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.623785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.623812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.635641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.635668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.646278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.646329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.658786] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.658813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.668646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.668688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.679957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.679982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.690552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.690591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.701226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.701252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.711965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.711990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.723153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.723181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.735560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.735587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.745425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.745451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.756798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.756824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.768196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.768222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.778886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.778910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.790131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.790156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.800872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.800912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.811779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.811819] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.360 [2024-07-12 13:25:44.822793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.360 [2024-07-12 13:25:44.822818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.834533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.834569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.846178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.846202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.859079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.859105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.869415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.869441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.880581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.880608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.891434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.891459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.902099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.902138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.912173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.912198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.923477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.923504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.934900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.934924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.945887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.945913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.956941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.956968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.969111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.969137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.980027] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.980054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:44.990816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:44.990842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:45.003350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:45.003377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:45.012499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.618 [2024-07-12 13:25:45.012526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.618 [2024-07-12 13:25:45.025054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.619 [2024-07-12 13:25:45.025081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.619 [2024-07-12 13:25:45.036418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.619 [2024-07-12 13:25:45.036445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.619 [2024-07-12 13:25:45.047751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.619 [2024-07-12 13:25:45.047792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.619 [2024-07-12 13:25:45.057595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.619 [2024-07-12 13:25:45.057636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.619 [2024-07-12 13:25:45.069019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.619 [2024-07-12 13:25:45.069045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.619 [2024-07-12 13:25:45.079412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.619 [2024-07-12 13:25:45.079439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.619 [2024-07-12 13:25:45.090176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.619 [2024-07-12 13:25:45.090205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.103099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.103126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.114118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.114144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.125496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.125524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.136168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.136195] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.146887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.146912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.157470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.157496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.167549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.167576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.178795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.178822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.189522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.189550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.200393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.200420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.210845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.210872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.221615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.221656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.232437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.232464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.242727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.242754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.253052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.253077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.263401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.263443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.274492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.274533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.285236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.285276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.295770] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.295795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.306538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.306565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.317412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.317438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.328223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.328249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.877 [2024-07-12 13:25:45.338766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.877 [2024-07-12 13:25:45.338792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.349708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.349750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.361206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.361231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.372145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.372170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.385420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.385447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.396337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.396363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.407445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.407486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.418714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.418741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.429236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.429263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.440236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.440284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.451334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.451361] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.462208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.462235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.473195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.473223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.483433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.483460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.494579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.494606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.505748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.505787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.516762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.516787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.527197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.527223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.537874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.537901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.548754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.548780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.559556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.559583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.570176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.570203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.583494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.583522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.592287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.592336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.136 [2024-07-12 13:25:45.605671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.136 [2024-07-12 13:25:45.605698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.616646] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.616675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.627454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.627481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.638106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.638133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.648435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.648470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.658755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.658780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.670029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.670055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.680022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.680048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.691433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.691460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.701848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.701874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.712524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.712551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.724756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.724782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.736156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.736198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.746808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.746836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.758131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.758156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.768979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.769005] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.780312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.780344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.790671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.790710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.801147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.801175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.811682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.811707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.824443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.824470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.835035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.835061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.845105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.845131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.395 [2024-07-12 13:25:45.856343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.395 [2024-07-12 13:25:45.856375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.653 [2024-07-12 13:25:45.867340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.653 [2024-07-12 13:25:45.867368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.653 [2024-07-12 13:25:45.878106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.653 [2024-07-12 13:25:45.878133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.653 [2024-07-12 13:25:45.889271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.653 [2024-07-12 13:25:45.889310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.653 [2024-07-12 13:25:45.905658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.653 [2024-07-12 13:25:45.905686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.653 [2024-07-12 13:25:45.914285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.653 [2024-07-12 13:25:45.914332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.653 [2024-07-12 13:25:45.927669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.653 [2024-07-12 13:25:45.927695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.653 [2024-07-12 13:25:45.938959] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.653 [2024-07-12 13:25:45.938986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.653 [2024-07-12 13:25:45.949991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.653 [2024-07-12 13:25:45.950016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:45.961248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:45.961274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:45.972123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:45.972147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:45.982813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:45.982838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:45.993803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:45.993829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.005004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.005031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.015130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.015155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.026347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.026374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.037007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.037047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.047828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.047853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.058513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.058538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.069524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.069557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.081841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.081868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.092803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.092829] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.103331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.103356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.113534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.113561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.654 [2024-07-12 13:25:46.124995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.654 [2024-07-12 13:25:46.125035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.137371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.137424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.149016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.149042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.158993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.159018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.170224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.170264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.181673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.181713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.194845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.194871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.203539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.203564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.216140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.216166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.227679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.227718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.238556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.238583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.249589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.249615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.260062] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.260087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.270934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.270960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.283789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.283816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.292805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.292831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.305966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.305991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.316546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.316572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.326887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.326913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.337886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.337911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.348425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.348451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.361286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.361336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.372523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.372550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.912 [2024-07-12 13:25:46.383628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:48.912 [2024-07-12 13:25:46.383656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.395087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.395130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.405686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.405725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.416041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.416080] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.427111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.427139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.437728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.437754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 00:17:49.170 Latency(us) 00:17:49.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.170 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:49.170 Nvme1n1 : 5.01 11627.93 90.84 0.00 0.00 10990.32 4102.07 24660.95 00:17:49.170 =================================================================================================================== 00:17:49.170 Total : 11627.93 90.84 0.00 0.00 10990.32 4102.07 24660.95 00:17:49.170 [2024-07-12 13:25:46.445466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.445491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.453535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.453561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.461577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.461612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.469640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.469685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.477656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.477703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.485674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.485719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.493693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.493739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.501721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.501766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.509740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.509785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.517761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.517807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.170 [2024-07-12 13:25:46.525780] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.170 [2024-07-12 13:25:46.525826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.533796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.533843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.541827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.541871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.549850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.549895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.557866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.557910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.565884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.565929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.573901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.573947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.581930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.581970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.589898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.589922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.597943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.597975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.606006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.606052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.614013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.614059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.622009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.622043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.630007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.630030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.171 [2024-07-12 13:25:46.638091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.171 [2024-07-12 13:25:46.638135] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.428 [2024-07-12 13:25:46.646131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.428 [2024-07-12 13:25:46.646185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.428 [2024-07-12 13:25:46.654107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.428 [2024-07-12 13:25:46.654143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.428 [2024-07-12 13:25:46.662087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.428 [2024-07-12 13:25:46.662107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.428 [2024-07-12 13:25:46.670108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:49.428 [2024-07-12 13:25:46.670129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3570349) - No such process 00:17:49.428 13:25:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3570349 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:49.429 delay0 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.429 13:25:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:49.429 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.429 [2024-07-12 13:25:46.786519] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:57.559 Initializing NVMe Controllers 00:17:57.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:57.559 Initialization complete. Launching workers. 
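[editor's sketch] The zcopy teardown driven above — waiting out the failing add_ns loop, removing the in-use namespace from cnode1, inserting a delay bdev on top of malloc0, re-exporting it as NSID 1, and then launching the abort example (whose completion statistics follow below) — can be replayed by hand outside the test harness. This is a minimal sketch, not the harness itself: it assumes rpc_cmd wraps scripts/rpc.py as in the SPDK test common helpers, that the commands are run from the spdk repository root, and that the target from this run is still listening on 10.0.0.2:4420.

  # remove NSID 1 from cnode1, then re-add it behind a delay bdev
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive abort traffic at the delayed namespace, as zcopy.sh does at its line 56
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'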
00:17:57.559 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 299, failed: 10870 00:17:57.559 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 11114, failed to submit 55 00:17:57.559 success 10966, unsuccess 148, failed 0 00:17:57.559 13:25:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:57.559 13:25:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:57.559 13:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:57.559 13:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:57.559 13:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:57.559 13:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:57.559 13:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.559 13:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:57.559 rmmod nvme_tcp 00:17:57.559 rmmod nvme_fabrics 00:17:57.559 rmmod nvme_keyring 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3569125 ']' 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3569125 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3569125 ']' 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3569125 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3569125 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3569125' 00:17:57.559 killing process with pid 3569125 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3569125 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3569125 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.559 13:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.932 13:25:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.932 00:17:58.932 real 0m28.670s 00:17:58.932 user 0m38.861s 00:17:58.932 sys 0m9.838s 00:17:58.932 13:25:56 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:17:58.932 13:25:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.932 ************************************ 00:17:58.932 END TEST nvmf_zcopy 00:17:58.932 ************************************ 00:17:58.932 13:25:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:58.932 13:25:56 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:58.932 13:25:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:58.932 13:25:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.932 13:25:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.932 ************************************ 00:17:58.932 START TEST nvmf_nmic 00:17:58.932 ************************************ 00:17:58.932 13:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:59.190 * Looking for test storage... 00:17:59.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.190 13:25:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:59.191 13:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:01.091 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:01.092 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:01.092 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:01.092 Found net devices under 0000:09:00.0: cvl_0_0 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:01.092 Found net devices under 0000:09:00.1: cvl_0_1 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.092 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:01.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:18:01.354 00:18:01.354 --- 10.0.0.2 ping statistics --- 00:18:01.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.354 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:18:01.354 00:18:01.354 --- 10.0.0.1 ping statistics --- 00:18:01.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.354 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3573846 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3573846 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3573846 ']' 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.354 13:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.354 [2024-07-12 13:25:58.745202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:18:01.354 [2024-07-12 13:25:58.745288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.354 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.354 [2024-07-12 13:25:58.781985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:01.354 [2024-07-12 13:25:58.807975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:01.611 [2024-07-12 13:25:58.894226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:01.611 [2024-07-12 13:25:58.894273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.611 [2024-07-12 13:25:58.894301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.611 [2024-07-12 13:25:58.894312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.611 [2024-07-12 13:25:58.894330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.611 [2024-07-12 13:25:58.894381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.611 [2024-07-12 13:25:58.894495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.611 [2024-07-12 13:25:58.894562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.611 [2024-07-12 13:25:58.894565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.611 [2024-07-12 13:25:59.046949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.611 Malloc0 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.611 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.868 [2024-07-12 13:25:59.100579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:01.868 test case1: single bdev can't be used in multiple subsystems 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.868 [2024-07-12 13:25:59.124444] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:01.868 [2024-07-12 13:25:59.124475] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:01.868 [2024-07-12 13:25:59.124490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.868 request: 00:18:01.868 { 00:18:01.868 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:01.868 "namespace": { 00:18:01.868 "bdev_name": "Malloc0", 00:18:01.868 "no_auto_visible": false 00:18:01.868 }, 00:18:01.868 "method": "nvmf_subsystem_add_ns", 00:18:01.868 "req_id": 1 00:18:01.868 } 00:18:01.868 Got JSON-RPC error response 00:18:01.868 response: 00:18:01.868 { 00:18:01.868 "code": -32602, 00:18:01.868 "message": "Invalid parameters" 00:18:01.868 } 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:01.868 Adding namespace failed - expected result. 
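[editor's sketch] The expected failure in case1 above comes from the exclusive_write claim that cnode1 already holds on Malloc0, so the second nvmf_subsystem_add_ns is rejected with JSON-RPC error -32602. A standalone sketch of that case, assuming scripts/rpc.py is invoked from the spdk repository root against the same running target (with cnode1 already owning Malloc0 as set up above):

  # create a second subsystem on the same listener
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # expected to fail: Malloc0 is already claimed by cnode1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      || echo 'Adding namespace failed - expected result.'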
00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:01.868 test case2: host connect to nvmf target in multiple paths 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:01.868 [2024-07-12 13:25:59.132542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.868 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:02.433 13:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:02.998 13:26:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:02.998 13:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:02.998 13:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.998 13:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:02.998 13:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:04.894 13:26:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:04.894 13:26:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:04.894 13:26:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.894 13:26:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:04.894 13:26:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.894 13:26:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:04.894 13:26:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:04.894 [global] 00:18:04.894 thread=1 00:18:04.894 invalidate=1 00:18:04.894 rw=write 00:18:04.894 time_based=1 00:18:04.894 runtime=1 00:18:04.894 ioengine=libaio 00:18:04.894 direct=1 00:18:04.894 bs=4096 00:18:04.894 iodepth=1 00:18:04.894 norandommap=0 00:18:04.894 numjobs=1 00:18:04.894 00:18:05.152 verify_dump=1 00:18:05.152 verify_backlog=512 00:18:05.152 verify_state_save=0 00:18:05.152 do_verify=1 00:18:05.152 verify=crc32c-intel 00:18:05.152 [job0] 00:18:05.152 filename=/dev/nvme0n1 00:18:05.152 Could not set queue depth (nvme0n1) 00:18:05.152 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:05.152 fio-3.35 00:18:05.152 Starting 1 thread 00:18:06.546 00:18:06.546 job0: (groupid=0, jobs=1): err= 0: pid=3574361: Fri Jul 12 13:26:03 2024 00:18:06.546 read: IOPS=432, BW=1731KiB/s (1773kB/s)(1740KiB/1005msec) 00:18:06.546 slat (nsec): min=7059, max=61106, avg=27108.04, stdev=10449.81 
00:18:06.546 clat (usec): min=315, max=41010, avg=1967.51, stdev=7622.11 00:18:06.546 lat (usec): min=323, max=41029, avg=1994.62, stdev=7622.66 00:18:06.546 clat percentiles (usec): 00:18:06.546 | 1.00th=[ 326], 5.00th=[ 347], 10.00th=[ 371], 20.00th=[ 429], 00:18:06.546 | 30.00th=[ 453], 40.00th=[ 474], 50.00th=[ 494], 60.00th=[ 502], 00:18:06.546 | 70.00th=[ 519], 80.00th=[ 562], 90.00th=[ 578], 95.00th=[ 594], 00:18:06.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:06.546 | 99.99th=[41157] 00:18:06.546 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:18:06.546 slat (usec): min=6, max=29649, avg=71.51, stdev=1309.76 00:18:06.547 clat (usec): min=161, max=367, avg=184.34, stdev=12.76 00:18:06.547 lat (usec): min=169, max=29876, avg=255.85, stdev=1311.72 00:18:06.547 clat percentiles (usec): 00:18:06.547 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 176], 00:18:06.547 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:18:06.547 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 196], 95.00th=[ 202], 00:18:06.547 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 367], 99.95th=[ 367], 00:18:06.547 | 99.99th=[ 367] 00:18:06.547 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:06.547 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:06.547 lat (usec) : 250=53.96%, 500=26.50%, 750=17.85% 00:18:06.547 lat (msec) : 50=1.69% 00:18:06.547 cpu : usr=1.10%, sys=1.79%, ctx=949, majf=0, minf=2 00:18:06.547 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:06.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.547 issued rwts: total=435,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.547 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:06.547 00:18:06.547 Run status group 0 (all jobs): 00:18:06.547 READ: bw=1731KiB/s (1773kB/s), 1731KiB/s-1731KiB/s (1773kB/s-1773kB/s), io=1740KiB (1782kB), run=1005-1005msec 00:18:06.547 WRITE: bw=2038KiB/s (2087kB/s), 2038KiB/s-2038KiB/s (2087kB/s-2087kB/s), io=2048KiB (2097kB), run=1005-1005msec 00:18:06.547 00:18:06.547 Disk stats (read/write): 00:18:06.547 nvme0n1: ios=458/512, merge=0/0, ticks=1684/90, in_queue=1774, util=98.70% 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
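# Editor's note: not part of the captured output. nvmftestfini, whose steps
# unfold in the log that follows, tears the fixture down roughly as sketched
# here (the real helpers in nvmf/common.sh and autotest_common.sh add retries
# and xtrace handling): unload the kernel initiator modules, stop the nvmf_tgt
# process ($nvmfpid, 3573846 in this run), then remove the target network
# namespace and flush the leftover initiator address.
modprobe -v -r nvme-tcp            # also pulls out nvme_fabrics / nvme_keyring, as logged below
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1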
00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.547 rmmod nvme_tcp 00:18:06.547 rmmod nvme_fabrics 00:18:06.547 rmmod nvme_keyring 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3573846 ']' 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3573846 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3573846 ']' 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3573846 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3573846 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3573846' 00:18:06.547 killing process with pid 3573846 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3573846 00:18:06.547 13:26:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3573846 00:18:06.807 13:26:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.807 13:26:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.807 13:26:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.807 13:26:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.807 13:26:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.807 13:26:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.807 13:26:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.807 13:26:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.341 13:26:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.341 00:18:09.341 real 0m9.876s 00:18:09.341 user 0m22.089s 00:18:09.341 sys 0m2.385s 00:18:09.341 13:26:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.341 13:26:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:09.341 ************************************ 00:18:09.341 END TEST nvmf_nmic 00:18:09.341 ************************************ 00:18:09.341 13:26:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:09.341 13:26:06 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:09.341 13:26:06 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:09.341 13:26:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.341 13:26:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.341 ************************************ 00:18:09.341 START TEST nvmf_fio_target 00:18:09.341 ************************************ 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:09.341 * Looking for test storage... 00:18:09.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.341 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.342 13:26:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.242 13:26:08 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:11.242 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:11.242 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.242 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.243 13:26:08 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:11.243 Found net devices under 0000:09:00.0: cvl_0_0 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:11.243 Found net devices under 0000:09:00.1: cvl_0_1 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:11.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:18:11.243 00:18:11.243 --- 10.0.0.2 ping statistics --- 00:18:11.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.243 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:18:11.243 00:18:11.243 --- 10.0.0.1 ping statistics --- 00:18:11.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.243 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3576535 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3576535 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3576535 ']' 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
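# Editor's note: not part of the captured output. A minimal sketch of what
# nvmfappstart/waitforlisten are doing at this point, using the namespace and
# binary invoked above: launch nvmf_tgt pinned to cores 0-3 inside the target
# netns, record its PID, and poll the JSON-RPC socket until it answers before
# any rpc.py configuration calls are issued. (waitforlisten in
# autotest_common.sh is more elaborate; this only captures the gist.)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done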
00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.243 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.243 [2024-07-12 13:26:08.631716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:18:11.243 [2024-07-12 13:26:08.631798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.243 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.243 [2024-07-12 13:26:08.667038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:11.243 [2024-07-12 13:26:08.692525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.501 [2024-07-12 13:26:08.775694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.501 [2024-07-12 13:26:08.775746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.501 [2024-07-12 13:26:08.775774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.501 [2024-07-12 13:26:08.775785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.501 [2024-07-12 13:26:08.775795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.501 [2024-07-12 13:26:08.775876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.501 [2024-07-12 13:26:08.775941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.501 [2024-07-12 13:26:08.776007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.501 [2024-07-12 13:26:08.776019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.501 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.501 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:11.501 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.501 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.501 13:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.501 13:26:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.501 13:26:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:11.759 [2024-07-12 13:26:09.165922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.759 13:26:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.017 13:26:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:12.017 13:26:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.274 13:26:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:12.274 13:26:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.532 13:26:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:12.532 13:26:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:12.789 13:26:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:12.789 13:26:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:13.045 13:26:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:13.337 13:26:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:13.337 13:26:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:13.594 13:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:13.594 13:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:13.851 13:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:13.851 13:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:14.108 13:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:14.365 13:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:14.365 13:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.623 13:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:14.623 13:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:14.880 13:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.137 [2024-07-12 13:26:12.482244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.137 13:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:15.395 13:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:15.651 13:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.215 13:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:18:16.215 13:26:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:16.215 13:26:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.215 13:26:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:16.215 13:26:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:16.215 13:26:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:18.739 13:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:18.739 13:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:18.739 13:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:18.739 13:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:18.739 13:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.739 13:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:18.739 13:26:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:18.739 [global] 00:18:18.739 thread=1 00:18:18.739 invalidate=1 00:18:18.739 rw=write 00:18:18.739 time_based=1 00:18:18.739 runtime=1 00:18:18.739 ioengine=libaio 00:18:18.739 direct=1 00:18:18.739 bs=4096 00:18:18.739 iodepth=1 00:18:18.739 norandommap=0 00:18:18.739 numjobs=1 00:18:18.739 00:18:18.739 verify_dump=1 00:18:18.739 verify_backlog=512 00:18:18.739 verify_state_save=0 00:18:18.739 do_verify=1 00:18:18.739 verify=crc32c-intel 00:18:18.739 [job0] 00:18:18.739 filename=/dev/nvme0n1 00:18:18.739 [job1] 00:18:18.740 filename=/dev/nvme0n2 00:18:18.740 [job2] 00:18:18.740 filename=/dev/nvme0n3 00:18:18.740 [job3] 00:18:18.740 filename=/dev/nvme0n4 00:18:18.740 Could not set queue depth (nvme0n1) 00:18:18.740 Could not set queue depth (nvme0n2) 00:18:18.740 Could not set queue depth (nvme0n3) 00:18:18.740 Could not set queue depth (nvme0n4) 00:18:18.740 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.740 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.740 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.740 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:18.740 fio-3.35 00:18:18.740 Starting 4 threads 00:18:19.672 00:18:19.672 job0: (groupid=0, jobs=1): err= 0: pid=3577505: Fri Jul 12 13:26:17 2024 00:18:19.672 read: IOPS=1451, BW=5806KiB/s (5946kB/s)(5812KiB/1001msec) 00:18:19.672 slat (nsec): min=5617, max=56320, avg=13201.99, stdev=6248.51 00:18:19.672 clat (usec): min=268, max=1782, avg=385.47, stdev=76.91 00:18:19.672 lat (usec): min=289, max=1789, avg=398.67, stdev=77.74 00:18:19.672 clat percentiles (usec): 00:18:19.672 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:18:19.672 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 383], 60.00th=[ 408], 00:18:19.672 | 70.00th=[ 424], 80.00th=[ 437], 90.00th=[ 478], 95.00th=[ 506], 00:18:19.672 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 988], 99.95th=[ 1778], 00:18:19.672 | 99.99th=[ 1778] 
00:18:19.672 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:19.672 slat (nsec): min=6845, max=75721, avg=14613.42, stdev=9250.03 00:18:19.672 clat (usec): min=185, max=2448, avg=251.11, stdev=80.84 00:18:19.672 lat (usec): min=193, max=2457, avg=265.72, stdev=84.02 00:18:19.672 clat percentiles (usec): 00:18:19.672 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:18:19.672 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:18:19.672 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 343], 95.00th=[ 392], 00:18:19.672 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 791], 99.95th=[ 2442], 00:18:19.672 | 99.99th=[ 2442] 00:18:19.672 bw ( KiB/s): min= 8192, max= 8192, per=37.91%, avg=8192.00, stdev= 0.00, samples=1 00:18:19.672 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:19.672 lat (usec) : 250=38.17%, 500=58.92%, 750=2.78%, 1000=0.07% 00:18:19.672 lat (msec) : 2=0.03%, 4=0.03% 00:18:19.672 cpu : usr=3.00%, sys=6.20%, ctx=2990, majf=0, minf=1 00:18:19.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.672 issued rwts: total=1453,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.672 job1: (groupid=0, jobs=1): err= 0: pid=3577506: Fri Jul 12 13:26:17 2024 00:18:19.672 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:19.672 slat (nsec): min=5637, max=46682, avg=13046.43, stdev=6089.72 00:18:19.672 clat (usec): min=262, max=3925, avg=317.05, stdev=94.48 00:18:19.672 lat (usec): min=269, max=3931, avg=330.10, stdev=94.66 00:18:19.672 clat percentiles (usec): 00:18:19.672 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 302], 00:18:19.672 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:18:19.672 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 343], 00:18:19.672 | 99.00th=[ 375], 99.50th=[ 441], 99.90th=[ 553], 99.95th=[ 3916], 00:18:19.672 | 99.99th=[ 3916] 00:18:19.672 write: IOPS=1908, BW=7632KiB/s (7816kB/s)(7640KiB/1001msec); 0 zone resets 00:18:19.672 slat (usec): min=7, max=21253, avg=26.15, stdev=486.03 00:18:19.672 clat (usec): min=175, max=1288, avg=224.77, stdev=41.01 00:18:19.672 lat (usec): min=184, max=21585, avg=250.92, stdev=490.18 00:18:19.672 clat percentiles (usec): 00:18:19.672 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:18:19.672 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:18:19.672 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 285], 00:18:19.672 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 857], 99.95th=[ 1287], 00:18:19.672 | 99.99th=[ 1287] 00:18:19.672 bw ( KiB/s): min= 8192, max= 8192, per=37.91%, avg=8192.00, stdev= 0.00, samples=1 00:18:19.672 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:19.672 lat (usec) : 250=49.13%, 500=50.70%, 750=0.09%, 1000=0.03% 00:18:19.672 lat (msec) : 2=0.03%, 4=0.03% 00:18:19.672 cpu : usr=3.90%, sys=6.40%, ctx=3448, majf=0, minf=1 00:18:19.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.672 issued rwts: total=1536,1910,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:18:19.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.672 job2: (groupid=0, jobs=1): err= 0: pid=3577507: Fri Jul 12 13:26:17 2024 00:18:19.672 read: IOPS=1188, BW=4755KiB/s (4869kB/s)(4836KiB/1017msec) 00:18:19.672 slat (nsec): min=5849, max=42006, avg=13743.94, stdev=6439.95 00:18:19.672 clat (usec): min=322, max=41291, avg=493.21, stdev=1661.90 00:18:19.672 lat (usec): min=331, max=41307, avg=506.95, stdev=1662.04 00:18:19.672 clat percentiles (usec): 00:18:19.672 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 375], 00:18:19.672 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 408], 60.00th=[ 445], 00:18:19.672 | 70.00th=[ 474], 80.00th=[ 486], 90.00th=[ 498], 95.00th=[ 515], 00:18:19.672 | 99.00th=[ 570], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:18:19.672 | 99.99th=[41157] 00:18:19.672 write: IOPS=1510, BW=6041KiB/s (6186kB/s)(6144KiB/1017msec); 0 zone resets 00:18:19.672 slat (nsec): min=7179, max=60006, avg=13611.68, stdev=6874.84 00:18:19.672 clat (usec): min=187, max=504, avg=242.05, stdev=34.90 00:18:19.672 lat (usec): min=198, max=514, avg=255.66, stdev=36.64 00:18:19.672 clat percentiles (usec): 00:18:19.672 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:18:19.672 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:18:19.672 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 306], 00:18:19.672 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 490], 99.95th=[ 506], 00:18:19.672 | 99.99th=[ 506] 00:18:19.672 bw ( KiB/s): min= 4096, max= 8192, per=28.43%, avg=6144.00, stdev=2896.31, samples=2 00:18:19.673 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:19.673 lat (usec) : 250=41.06%, 500=54.61%, 750=4.26% 00:18:19.673 lat (msec) : 50=0.07% 00:18:19.673 cpu : usr=3.15%, sys=4.92%, ctx=2746, majf=0, minf=1 00:18:19.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.673 issued rwts: total=1209,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.673 job3: (groupid=0, jobs=1): err= 0: pid=3577508: Fri Jul 12 13:26:17 2024 00:18:19.673 read: IOPS=40, BW=163KiB/s (167kB/s)(164KiB/1006msec) 00:18:19.673 slat (nsec): min=6642, max=33180, avg=12574.22, stdev=5765.28 00:18:19.673 clat (usec): min=363, max=42062, avg=20770.16, stdev=20338.13 00:18:19.673 lat (usec): min=378, max=42076, avg=20782.74, stdev=20339.72 00:18:19.673 clat percentiles (usec): 00:18:19.673 | 1.00th=[ 363], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 429], 00:18:19.673 | 30.00th=[ 474], 40.00th=[ 545], 50.00th=[19792], 60.00th=[41157], 00:18:19.673 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:18:19.673 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:19.673 | 99.99th=[42206] 00:18:19.673 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:18:19.673 slat (usec): min=5, max=23146, avg=55.99, stdev=1022.47 00:18:19.673 clat (usec): min=177, max=656, avg=242.25, stdev=42.84 00:18:19.673 lat (usec): min=185, max=23424, avg=298.23, stdev=1024.98 00:18:19.673 clat percentiles (usec): 00:18:19.673 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:18:19.673 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:18:19.673 | 
70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 318], 00:18:19.673 | 99.00th=[ 392], 99.50th=[ 441], 99.90th=[ 660], 99.95th=[ 660], 00:18:19.673 | 99.99th=[ 660] 00:18:19.673 bw ( KiB/s): min= 4096, max= 4096, per=18.96%, avg=4096.00, stdev= 0.00, samples=1 00:18:19.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:19.673 lat (usec) : 250=63.83%, 500=31.28%, 750=1.08% 00:18:19.673 lat (msec) : 20=0.18%, 50=3.62% 00:18:19.673 cpu : usr=0.30%, sys=0.60%, ctx=555, majf=0, minf=2 00:18:19.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.673 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.673 00:18:19.673 Run status group 0 (all jobs): 00:18:19.673 READ: bw=16.3MiB/s (17.1MB/s), 163KiB/s-6138KiB/s (167kB/s-6285kB/s), io=16.6MiB (17.4MB), run=1001-1017msec 00:18:19.673 WRITE: bw=21.1MiB/s (22.1MB/s), 2036KiB/s-7632KiB/s (2085kB/s-7816kB/s), io=21.5MiB (22.5MB), run=1001-1017msec 00:18:19.673 00:18:19.673 Disk stats (read/write): 00:18:19.673 nvme0n1: ios=1075/1536, merge=0/0, ticks=436/360, in_queue=796, util=87.47% 00:18:19.673 nvme0n2: ios=1400/1536, merge=0/0, ticks=637/337, in_queue=974, util=91.35% 00:18:19.673 nvme0n3: ios=1081/1365, merge=0/0, ticks=546/317, in_queue=863, util=95.31% 00:18:19.673 nvme0n4: ios=86/512, merge=0/0, ticks=930/115, in_queue=1045, util=95.59% 00:18:19.673 13:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:19.673 [global] 00:18:19.673 thread=1 00:18:19.673 invalidate=1 00:18:19.673 rw=randwrite 00:18:19.673 time_based=1 00:18:19.673 runtime=1 00:18:19.673 ioengine=libaio 00:18:19.673 direct=1 00:18:19.673 bs=4096 00:18:19.673 iodepth=1 00:18:19.673 norandommap=0 00:18:19.673 numjobs=1 00:18:19.673 00:18:19.673 verify_dump=1 00:18:19.673 verify_backlog=512 00:18:19.673 verify_state_save=0 00:18:19.673 do_verify=1 00:18:19.673 verify=crc32c-intel 00:18:19.673 [job0] 00:18:19.673 filename=/dev/nvme0n1 00:18:19.673 [job1] 00:18:19.673 filename=/dev/nvme0n2 00:18:19.673 [job2] 00:18:19.673 filename=/dev/nvme0n3 00:18:19.673 [job3] 00:18:19.673 filename=/dev/nvme0n4 00:18:19.673 Could not set queue depth (nvme0n1) 00:18:19.673 Could not set queue depth (nvme0n2) 00:18:19.673 Could not set queue depth (nvme0n3) 00:18:19.673 Could not set queue depth (nvme0n4) 00:18:19.930 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.930 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.930 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.930 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.930 fio-3.35 00:18:19.930 Starting 4 threads 00:18:21.301 00:18:21.301 job0: (groupid=0, jobs=1): err= 0: pid=3577736: Fri Jul 12 13:26:18 2024 00:18:21.301 read: IOPS=290, BW=1161KiB/s (1188kB/s)(1200KiB/1034msec) 00:18:21.301 slat (nsec): min=6241, max=59729, avg=29411.27, stdev=9770.35 00:18:21.301 clat (usec): min=324, max=41041, avg=2775.61, stdev=9370.22 00:18:21.301 lat 
(usec): min=339, max=41067, avg=2805.02, stdev=9369.78 00:18:21.301 clat percentiles (usec): 00:18:21.301 | 1.00th=[ 334], 5.00th=[ 383], 10.00th=[ 400], 20.00th=[ 424], 00:18:21.301 | 30.00th=[ 445], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 486], 00:18:21.301 | 70.00th=[ 506], 80.00th=[ 545], 90.00th=[ 652], 95.00th=[41157], 00:18:21.301 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:21.301 | 99.99th=[41157] 00:18:21.301 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:18:21.301 slat (nsec): min=6143, max=31715, avg=12049.52, stdev=5026.76 00:18:21.301 clat (usec): min=165, max=1107, avg=354.29, stdev=146.69 00:18:21.301 lat (usec): min=174, max=1117, avg=366.34, stdev=150.33 00:18:21.301 clat percentiles (usec): 00:18:21.301 | 1.00th=[ 186], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 227], 00:18:21.301 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 260], 60.00th=[ 404], 00:18:21.301 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 586], 00:18:21.301 | 99.00th=[ 693], 99.50th=[ 766], 99.90th=[ 1106], 99.95th=[ 1106], 00:18:21.301 | 99.99th=[ 1106] 00:18:21.301 bw ( KiB/s): min= 4096, max= 4096, per=23.71%, avg=4096.00, stdev= 0.00, samples=1 00:18:21.301 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:21.301 lat (usec) : 250=28.45%, 500=45.32%, 750=23.15%, 1000=0.74% 00:18:21.301 lat (msec) : 2=0.25%, 50=2.09% 00:18:21.301 cpu : usr=0.58%, sys=1.65%, ctx=813, majf=0, minf=1 00:18:21.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.301 issued rwts: total=300,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.301 job1: (groupid=0, jobs=1): err= 0: pid=3577737: Fri Jul 12 13:26:18 2024 00:18:21.301 read: IOPS=1056, BW=4228KiB/s (4329kB/s)(4232KiB/1001msec) 00:18:21.301 slat (nsec): min=6405, max=65363, avg=27043.82, stdev=9823.56 00:18:21.301 clat (usec): min=321, max=41020, avg=504.65, stdev=1248.32 00:18:21.302 lat (usec): min=334, max=41038, avg=531.69, stdev=1248.17 00:18:21.302 clat percentiles (usec): 00:18:21.302 | 1.00th=[ 379], 5.00th=[ 396], 10.00th=[ 416], 20.00th=[ 437], 00:18:21.302 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 465], 00:18:21.302 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[ 553], 00:18:21.302 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 1450], 99.95th=[41157], 00:18:21.302 | 99.99th=[41157] 00:18:21.302 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:21.302 slat (nsec): min=5676, max=63783, avg=15209.98, stdev=9965.68 00:18:21.302 clat (usec): min=185, max=1547, avg=259.98, stdev=87.18 00:18:21.302 lat (usec): min=194, max=1554, avg=275.19, stdev=91.09 00:18:21.302 clat percentiles (usec): 00:18:21.302 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:18:21.302 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 245], 00:18:21.302 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 371], 95.00th=[ 408], 00:18:21.302 | 99.00th=[ 453], 99.50th=[ 676], 99.90th=[ 1254], 99.95th=[ 1549], 00:18:21.302 | 99.99th=[ 1549] 00:18:21.302 bw ( KiB/s): min= 6000, max= 6000, per=34.74%, avg=6000.00, stdev= 0.00, samples=1 00:18:21.302 iops : min= 1500, max= 1500, avg=1500.00, stdev= 0.00, samples=1 00:18:21.302 lat (usec) : 250=39.40%, 
500=53.39%, 750=6.82%, 1000=0.04% 00:18:21.302 lat (msec) : 2=0.31%, 50=0.04% 00:18:21.302 cpu : usr=2.70%, sys=5.60%, ctx=2594, majf=0, minf=1 00:18:21.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.302 issued rwts: total=1058,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.302 job2: (groupid=0, jobs=1): err= 0: pid=3577738: Fri Jul 12 13:26:18 2024 00:18:21.302 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:21.302 slat (nsec): min=6068, max=67362, avg=29401.91, stdev=9780.02 00:18:21.302 clat (usec): min=315, max=895, avg=486.82, stdev=87.53 00:18:21.302 lat (usec): min=329, max=930, avg=516.23, stdev=90.10 00:18:21.302 clat percentiles (usec): 00:18:21.302 | 1.00th=[ 330], 5.00th=[ 363], 10.00th=[ 383], 20.00th=[ 412], 00:18:21.302 | 30.00th=[ 441], 40.00th=[ 461], 50.00th=[ 478], 60.00th=[ 498], 00:18:21.302 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 652], 00:18:21.302 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 832], 99.95th=[ 898], 00:18:21.302 | 99.99th=[ 898] 00:18:21.302 write: IOPS=1170, BW=4683KiB/s (4796kB/s)(4688KiB/1001msec); 0 zone resets 00:18:21.302 slat (nsec): min=6636, max=61471, avg=22661.08, stdev=13295.50 00:18:21.302 clat (usec): min=188, max=756, avg=366.30, stdev=117.74 00:18:21.302 lat (usec): min=196, max=773, avg=388.96, stdev=122.50 00:18:21.302 clat percentiles (usec): 00:18:21.302 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 235], 00:18:21.302 | 30.00th=[ 251], 40.00th=[ 318], 50.00th=[ 383], 60.00th=[ 420], 00:18:21.302 | 70.00th=[ 449], 80.00th=[ 474], 90.00th=[ 515], 95.00th=[ 553], 00:18:21.302 | 99.00th=[ 627], 99.50th=[ 676], 99.90th=[ 750], 99.95th=[ 758], 00:18:21.302 | 99.99th=[ 758] 00:18:21.302 bw ( KiB/s): min= 4400, max= 4400, per=25.47%, avg=4400.00, stdev= 0.00, samples=1 00:18:21.302 iops : min= 1100, max= 1100, avg=1100.00, stdev= 0.00, samples=1 00:18:21.302 lat (usec) : 250=15.85%, 500=59.52%, 750=24.13%, 1000=0.50% 00:18:21.302 cpu : usr=3.40%, sys=5.50%, ctx=2199, majf=0, minf=1 00:18:21.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.302 issued rwts: total=1024,1172,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.302 job3: (groupid=0, jobs=1): err= 0: pid=3577739: Fri Jul 12 13:26:18 2024 00:18:21.302 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:21.302 slat (nsec): min=7113, max=65919, avg=28498.75, stdev=10266.91 00:18:21.302 clat (usec): min=289, max=41022, avg=588.38, stdev=2547.96 00:18:21.302 lat (usec): min=309, max=41041, avg=616.88, stdev=2547.22 00:18:21.302 clat percentiles (usec): 00:18:21.302 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 351], 00:18:21.302 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 420], 60.00th=[ 441], 00:18:21.302 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 537], 00:18:21.302 | 99.00th=[ 635], 99.50th=[ 766], 99.90th=[41157], 99.95th=[41157], 00:18:21.302 | 99.99th=[41157] 00:18:21.302 write: IOPS=1243, BW=4975KiB/s (5094kB/s)(4980KiB/1001msec); 0 zone resets 
00:18:21.302 slat (nsec): min=6667, max=56261, avg=15216.51, stdev=7104.49 00:18:21.302 clat (usec): min=190, max=485, avg=270.18, stdev=60.30 00:18:21.302 lat (usec): min=206, max=526, avg=285.39, stdev=62.38 00:18:21.302 clat percentiles (usec): 00:18:21.302 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 229], 00:18:21.302 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:18:21.302 | 70.00th=[ 269], 80.00th=[ 310], 90.00th=[ 383], 95.00th=[ 404], 00:18:21.302 | 99.00th=[ 453], 99.50th=[ 457], 99.90th=[ 486], 99.95th=[ 486], 00:18:21.302 | 99.99th=[ 486] 00:18:21.302 bw ( KiB/s): min= 4096, max= 4096, per=23.71%, avg=4096.00, stdev= 0.00, samples=1 00:18:21.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:21.302 lat (usec) : 250=29.13%, 500=66.55%, 750=4.05%, 1000=0.04% 00:18:21.302 lat (msec) : 20=0.04%, 50=0.18% 00:18:21.302 cpu : usr=2.50%, sys=4.90%, ctx=2271, majf=0, minf=2 00:18:21.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.302 issued rwts: total=1024,1245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.302 00:18:21.302 Run status group 0 (all jobs): 00:18:21.302 READ: bw=12.9MiB/s (13.5MB/s), 1161KiB/s-4228KiB/s (1188kB/s-4329kB/s), io=13.3MiB (13.9MB), run=1001-1034msec 00:18:21.302 WRITE: bw=16.9MiB/s (17.7MB/s), 1981KiB/s-6138KiB/s (2028kB/s-6285kB/s), io=17.4MiB (18.3MB), run=1001-1034msec 00:18:21.302 00:18:21.302 Disk stats (read/write): 00:18:21.302 nvme0n1: ios=334/512, merge=0/0, ticks=1048/177, in_queue=1225, util=99.00% 00:18:21.302 nvme0n2: ios=1005/1024, merge=0/0, ticks=470/268, in_queue=738, util=86.27% 00:18:21.302 nvme0n3: ios=817/1024, merge=0/0, ticks=567/389, in_queue=956, util=98.22% 00:18:21.302 nvme0n4: ios=946/1024, merge=0/0, ticks=684/251, in_queue=935, util=98.31% 00:18:21.302 13:26:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:21.302 [global] 00:18:21.302 thread=1 00:18:21.302 invalidate=1 00:18:21.302 rw=write 00:18:21.302 time_based=1 00:18:21.302 runtime=1 00:18:21.302 ioengine=libaio 00:18:21.302 direct=1 00:18:21.302 bs=4096 00:18:21.302 iodepth=128 00:18:21.302 norandommap=0 00:18:21.302 numjobs=1 00:18:21.302 00:18:21.302 verify_dump=1 00:18:21.302 verify_backlog=512 00:18:21.302 verify_state_save=0 00:18:21.302 do_verify=1 00:18:21.302 verify=crc32c-intel 00:18:21.302 [job0] 00:18:21.302 filename=/dev/nvme0n1 00:18:21.302 [job1] 00:18:21.302 filename=/dev/nvme0n2 00:18:21.302 [job2] 00:18:21.302 filename=/dev/nvme0n3 00:18:21.302 [job3] 00:18:21.302 filename=/dev/nvme0n4 00:18:21.302 Could not set queue depth (nvme0n1) 00:18:21.302 Could not set queue depth (nvme0n2) 00:18:21.302 Could not set queue depth (nvme0n3) 00:18:21.302 Could not set queue depth (nvme0n4) 00:18:21.302 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.302 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.302 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.302 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.302 fio-3.35 00:18:21.302 Starting 4 threads 00:18:22.673 00:18:22.673 job0: (groupid=0, jobs=1): err= 0: pid=3578084: Fri Jul 12 13:26:19 2024 00:18:22.673 read: IOPS=5835, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1007msec) 00:18:22.673 slat (usec): min=2, max=9734, avg=88.64, stdev=621.45 00:18:22.673 clat (usec): min=4179, max=21005, avg=11366.89, stdev=2690.55 00:18:22.673 lat (usec): min=4186, max=21075, avg=11455.53, stdev=2728.05 00:18:22.673 clat percentiles (usec): 00:18:22.673 | 1.00th=[ 5145], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9765], 00:18:22.673 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:18:22.673 | 70.00th=[11863], 80.00th=[13435], 90.00th=[15139], 95.00th=[17433], 00:18:22.673 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20579], 99.95th=[21103], 00:18:22.673 | 99.99th=[21103] 00:18:22.673 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:18:22.673 slat (usec): min=3, max=8756, avg=71.06, stdev=425.92 00:18:22.673 clat (usec): min=1100, max=20957, avg=9928.16, stdev=2635.10 00:18:22.673 lat (usec): min=1109, max=20964, avg=9999.22, stdev=2650.71 00:18:22.673 clat percentiles (usec): 00:18:22.673 | 1.00th=[ 3687], 5.00th=[ 5473], 10.00th=[ 6390], 20.00th=[ 7111], 00:18:22.673 | 30.00th=[ 8717], 40.00th=[10159], 50.00th=[10683], 60.00th=[10945], 00:18:22.673 | 70.00th=[11207], 80.00th=[11338], 90.00th=[13304], 95.00th=[14222], 00:18:22.673 | 99.00th=[15533], 99.50th=[15664], 99.90th=[19792], 99.95th=[20317], 00:18:22.673 | 99.99th=[20841] 00:18:22.673 bw ( KiB/s): min=24576, max=24576, per=37.09%, avg=24576.00, stdev= 0.00, samples=2 00:18:22.674 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:18:22.674 lat (msec) : 2=0.08%, 4=0.57%, 10=32.70%, 20=66.40%, 50=0.25% 00:18:22.674 cpu : usr=5.67%, sys=8.45%, ctx=603, majf=0, minf=1 00:18:22.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:22.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.674 issued rwts: total=5876,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.674 job1: (groupid=0, jobs=1): err= 0: pid=3578085: Fri Jul 12 13:26:19 2024 00:18:22.674 read: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec) 00:18:22.674 slat (usec): min=2, max=22979, avg=182.19, stdev=1319.30 00:18:22.674 clat (usec): min=6039, max=87778, avg=21003.59, stdev=10444.28 00:18:22.674 lat (usec): min=6045, max=87796, avg=21185.78, stdev=10564.71 00:18:22.674 clat percentiles (usec): 00:18:22.674 | 1.00th=[11207], 5.00th=[12518], 10.00th=[14091], 20.00th=[15139], 00:18:22.674 | 30.00th=[16909], 40.00th=[17957], 50.00th=[19530], 60.00th=[20055], 00:18:22.674 | 70.00th=[21365], 80.00th=[23462], 90.00th=[25560], 95.00th=[35390], 00:18:22.674 | 99.00th=[79168], 99.50th=[80217], 99.90th=[87557], 99.95th=[87557], 00:18:22.674 | 99.99th=[87557] 00:18:22.674 write: IOPS=2414, BW=9658KiB/s (9890kB/s)(9784KiB/1013msec); 0 zone resets 00:18:22.674 slat (usec): min=3, max=18028, avg=249.54, stdev=1163.16 00:18:22.674 clat (usec): min=5008, max=87785, avg=34949.41, stdev=20716.04 00:18:22.674 lat (usec): min=5014, max=87805, avg=35198.95, stdev=20849.79 00:18:22.674 clat percentiles (usec): 00:18:22.674 | 1.00th=[ 7504], 5.00th=[11338], 10.00th=[12780], 20.00th=[15664], 00:18:22.674 | 30.00th=[19792], 
40.00th=[23462], 50.00th=[28181], 60.00th=[32113], 00:18:22.674 | 70.00th=[49021], 80.00th=[59507], 90.00th=[67634], 95.00th=[70779], 00:18:22.674 | 99.00th=[78119], 99.50th=[79168], 99.90th=[80217], 99.95th=[87557], 00:18:22.674 | 99.99th=[87557] 00:18:22.674 bw ( KiB/s): min= 8064, max=10480, per=13.99%, avg=9272.00, stdev=1708.37, samples=2 00:18:22.674 iops : min= 2016, max= 2620, avg=2318.00, stdev=427.09, samples=2 00:18:22.674 lat (msec) : 10=2.76%, 20=40.45%, 50=39.27%, 100=17.51% 00:18:22.674 cpu : usr=2.47%, sys=3.26%, ctx=281, majf=0, minf=1 00:18:22.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:22.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.674 issued rwts: total=2048,2446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.674 job2: (groupid=0, jobs=1): err= 0: pid=3578086: Fri Jul 12 13:26:19 2024 00:18:22.674 read: IOPS=4642, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1011msec) 00:18:22.674 slat (usec): min=3, max=13188, avg=116.43, stdev=818.01 00:18:22.674 clat (usec): min=4281, max=28289, avg=14036.98, stdev=3714.82 00:18:22.674 lat (usec): min=4288, max=28864, avg=14153.41, stdev=3768.65 00:18:22.674 clat percentiles (usec): 00:18:22.674 | 1.00th=[ 4883], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[11863], 00:18:22.674 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13435], 00:18:22.674 | 70.00th=[14746], 80.00th=[16581], 90.00th=[19530], 95.00th=[22152], 00:18:22.674 | 99.00th=[24249], 99.50th=[25035], 99.90th=[27395], 99.95th=[27395], 00:18:22.674 | 99.99th=[28181] 00:18:22.674 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec); 0 zone resets 00:18:22.674 slat (usec): min=4, max=10776, avg=80.50, stdev=379.00 00:18:22.674 clat (usec): min=1172, max=27366, avg=12091.78, stdev=3123.71 00:18:22.674 lat (usec): min=1182, max=27374, avg=12172.28, stdev=3141.39 00:18:22.674 clat percentiles (usec): 00:18:22.674 | 1.00th=[ 3294], 5.00th=[ 5800], 10.00th=[ 7373], 20.00th=[ 9372], 00:18:22.674 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:18:22.674 | 70.00th=[13435], 80.00th=[13566], 90.00th=[14353], 95.00th=[16909], 00:18:22.674 | 99.00th=[20317], 99.50th=[22938], 99.90th=[24511], 99.95th=[25035], 00:18:22.674 | 99.99th=[27395] 00:18:22.674 bw ( KiB/s): min=20152, max=20480, per=30.66%, avg=20316.00, stdev=231.93, samples=2 00:18:22.674 iops : min= 5038, max= 5120, avg=5079.00, stdev=57.98, samples=2 00:18:22.674 lat (msec) : 2=0.03%, 4=0.87%, 10=13.47%, 20=80.85%, 50=4.78% 00:18:22.674 cpu : usr=5.15%, sys=7.72%, ctx=620, majf=0, minf=1 00:18:22.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:22.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.674 issued rwts: total=4694,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.674 job3: (groupid=0, jobs=1): err= 0: pid=3578087: Fri Jul 12 13:26:19 2024 00:18:22.674 read: IOPS=2909, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1004msec) 00:18:22.674 slat (usec): min=2, max=22634, avg=161.90, stdev=1148.53 00:18:22.674 clat (usec): min=3127, max=55344, avg=20813.34, stdev=7631.80 00:18:22.674 lat (usec): min=7674, max=55365, avg=20975.24, stdev=7709.49 
00:18:22.674 clat percentiles (usec): 00:18:22.674 | 1.00th=[ 8094], 5.00th=[13435], 10.00th=[13698], 20.00th=[14746], 00:18:22.674 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18744], 60.00th=[19792], 00:18:22.674 | 70.00th=[21103], 80.00th=[24511], 90.00th=[30540], 95.00th=[40109], 00:18:22.674 | 99.00th=[43254], 99.50th=[44827], 99.90th=[47449], 99.95th=[50070], 00:18:22.674 | 99.99th=[55313] 00:18:22.674 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:18:22.674 slat (usec): min=3, max=25496, avg=164.85, stdev=1262.61 00:18:22.674 clat (usec): min=5502, max=71193, avg=21234.46, stdev=9815.50 00:18:22.674 lat (usec): min=5509, max=71212, avg=21399.32, stdev=9936.01 00:18:22.674 clat percentiles (usec): 00:18:22.674 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[14222], 20.00th=[15795], 00:18:22.674 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17433], 60.00th=[18220], 00:18:22.674 | 70.00th=[19792], 80.00th=[26608], 90.00th=[40109], 95.00th=[45351], 00:18:22.674 | 99.00th=[51119], 99.50th=[51119], 99.90th=[60031], 99.95th=[70779], 00:18:22.674 | 99.99th=[70779] 00:18:22.674 bw ( KiB/s): min=12288, max=12288, per=18.54%, avg=12288.00, stdev= 0.00, samples=2 00:18:22.674 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:22.674 lat (msec) : 4=0.02%, 10=2.54%, 20=65.59%, 50=30.70%, 100=1.15% 00:18:22.674 cpu : usr=2.39%, sys=4.19%, ctx=199, majf=0, minf=1 00:18:22.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:22.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.674 issued rwts: total=2921,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.674 00:18:22.674 Run status group 0 (all jobs): 00:18:22.674 READ: bw=59.9MiB/s (62.8MB/s), 8087KiB/s-22.8MiB/s (8281kB/s-23.9MB/s), io=60.7MiB (63.6MB), run=1004-1013msec 00:18:22.674 WRITE: bw=64.7MiB/s (67.9MB/s), 9658KiB/s-23.8MiB/s (9890kB/s-25.0MB/s), io=65.6MiB (68.7MB), run=1004-1013msec 00:18:22.674 00:18:22.674 Disk stats (read/write): 00:18:22.674 nvme0n1: ios=5075/5120, merge=0/0, ticks=54924/48677, in_queue=103601, util=86.77% 00:18:22.674 nvme0n2: ios=1926/2048, merge=0/0, ticks=39619/65880, in_queue=105499, util=86.89% 00:18:22.674 nvme0n3: ios=4126/4127, merge=0/0, ticks=56379/48294, in_queue=104673, util=97.60% 00:18:22.674 nvme0n4: ios=2475/2560, merge=0/0, ticks=23455/26883, in_queue=50338, util=91.05% 00:18:22.674 13:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:22.674 [global] 00:18:22.674 thread=1 00:18:22.674 invalidate=1 00:18:22.674 rw=randwrite 00:18:22.674 time_based=1 00:18:22.674 runtime=1 00:18:22.674 ioengine=libaio 00:18:22.674 direct=1 00:18:22.674 bs=4096 00:18:22.674 iodepth=128 00:18:22.674 norandommap=0 00:18:22.674 numjobs=1 00:18:22.674 00:18:22.674 verify_dump=1 00:18:22.674 verify_backlog=512 00:18:22.674 verify_state_save=0 00:18:22.674 do_verify=1 00:18:22.674 verify=crc32c-intel 00:18:22.674 [job0] 00:18:22.674 filename=/dev/nvme0n1 00:18:22.674 [job1] 00:18:22.674 filename=/dev/nvme0n2 00:18:22.674 [job2] 00:18:22.674 filename=/dev/nvme0n3 00:18:22.674 [job3] 00:18:22.674 filename=/dev/nvme0n4 00:18:22.674 Could not set queue depth (nvme0n1) 00:18:22.674 Could not set queue depth (nvme0n2) 00:18:22.674 Could not set 
queue depth (nvme0n3) 00:18:22.674 Could not set queue depth (nvme0n4) 00:18:22.931 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:22.931 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:22.931 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:22.931 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:22.931 fio-3.35 00:18:22.931 Starting 4 threads 00:18:24.305 00:18:24.305 job0: (groupid=0, jobs=1): err= 0: pid=3578313: Fri Jul 12 13:26:21 2024 00:18:24.305 read: IOPS=3673, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1010msec) 00:18:24.306 slat (usec): min=2, max=17918, avg=120.64, stdev=852.59 00:18:24.306 clat (usec): min=2920, max=34722, avg=15126.09, stdev=4517.82 00:18:24.306 lat (usec): min=5923, max=34737, avg=15246.73, stdev=4579.10 00:18:24.306 clat percentiles (usec): 00:18:24.306 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[10945], 20.00th=[11469], 00:18:24.306 | 30.00th=[11731], 40.00th=[13304], 50.00th=[14091], 60.00th=[15008], 00:18:24.306 | 70.00th=[15533], 80.00th=[17957], 90.00th=[22152], 95.00th=[25035], 00:18:24.306 | 99.00th=[29754], 99.50th=[31065], 99.90th=[32900], 99.95th=[33162], 00:18:24.306 | 99.99th=[34866] 00:18:24.306 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:18:24.306 slat (usec): min=4, max=11881, avg=128.35, stdev=679.35 00:18:24.306 clat (usec): min=3639, max=40718, avg=17534.47, stdev=7975.24 00:18:24.306 lat (usec): min=3649, max=40750, avg=17662.82, stdev=8024.25 00:18:24.306 clat percentiles (usec): 00:18:24.306 | 1.00th=[ 5866], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[10683], 00:18:24.306 | 30.00th=[12387], 40.00th=[13960], 50.00th=[15926], 60.00th=[18482], 00:18:24.306 | 70.00th=[20841], 80.00th=[21627], 90.00th=[31065], 95.00th=[34866], 00:18:24.306 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:18:24.306 | 99.99th=[40633] 00:18:24.306 bw ( KiB/s): min=13400, max=19352, per=24.48%, avg=16376.00, stdev=4208.70, samples=2 00:18:24.306 iops : min= 3350, max= 4838, avg=4094.00, stdev=1052.17, samples=2 00:18:24.306 lat (msec) : 4=0.09%, 10=10.68%, 20=63.85%, 50=25.38% 00:18:24.306 cpu : usr=3.67%, sys=6.64%, ctx=378, majf=0, minf=5 00:18:24.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:24.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.306 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.306 job1: (groupid=0, jobs=1): err= 0: pid=3578314: Fri Jul 12 13:26:21 2024 00:18:24.306 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:18:24.306 slat (usec): min=3, max=20886, avg=164.80, stdev=1209.80 00:18:24.306 clat (usec): min=9004, max=81129, avg=20362.21, stdev=12220.75 00:18:24.306 lat (usec): min=9009, max=92681, avg=20527.00, stdev=12348.31 00:18:24.306 clat percentiles (usec): 00:18:24.306 | 1.00th=[10159], 5.00th=[10945], 10.00th=[11731], 20.00th=[12256], 00:18:24.306 | 30.00th=[12780], 40.00th=[13566], 50.00th=[14484], 60.00th=[15270], 00:18:24.306 | 70.00th=[18220], 80.00th=[33817], 90.00th=[40109], 95.00th=[41681], 00:18:24.306 | 99.00th=[64226], 99.50th=[76022], 
99.90th=[79168], 99.95th=[79168], 00:18:24.306 | 99.99th=[81265] 00:18:24.306 write: IOPS=2818, BW=11.0MiB/s (11.5MB/s)(11.1MiB/1010msec); 0 zone resets 00:18:24.306 slat (usec): min=4, max=20125, avg=196.09, stdev=1172.26 00:18:24.306 clat (usec): min=1823, max=95431, avg=26623.86, stdev=15264.24 00:18:24.306 lat (usec): min=1839, max=96492, avg=26819.95, stdev=15366.93 00:18:24.306 clat percentiles (usec): 00:18:24.306 | 1.00th=[ 6587], 5.00th=[10028], 10.00th=[11731], 20.00th=[14615], 00:18:24.306 | 30.00th=[18744], 40.00th=[20841], 50.00th=[21365], 60.00th=[25297], 00:18:24.306 | 70.00th=[30016], 80.00th=[36439], 90.00th=[44827], 95.00th=[55837], 00:18:24.306 | 99.00th=[89654], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:18:24.306 | 99.99th=[95945] 00:18:24.306 bw ( KiB/s): min=10608, max=11144, per=16.26%, avg=10876.00, stdev=379.01, samples=2 00:18:24.306 iops : min= 2652, max= 2786, avg=2719.00, stdev=94.75, samples=2 00:18:24.306 lat (msec) : 2=0.04%, 10=1.66%, 20=50.42%, 50=42.78%, 100=5.10% 00:18:24.306 cpu : usr=2.97%, sys=4.86%, ctx=287, majf=0, minf=13 00:18:24.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:24.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.306 issued rwts: total=2560,2847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.306 job2: (groupid=0, jobs=1): err= 0: pid=3578315: Fri Jul 12 13:26:21 2024 00:18:24.306 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:18:24.306 slat (usec): min=3, max=9145, avg=101.90, stdev=546.44 00:18:24.306 clat (usec): min=8732, max=24184, avg=12944.20, stdev=2259.16 00:18:24.306 lat (usec): min=8946, max=27675, avg=13046.10, stdev=2283.03 00:18:24.306 clat percentiles (usec): 00:18:24.306 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[11076], 20.00th=[11994], 00:18:24.306 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:18:24.306 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14353], 95.00th=[17695], 00:18:24.306 | 99.00th=[23462], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:18:24.306 | 99.99th=[24249] 00:18:24.306 write: IOPS=4958, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1004msec); 0 zone resets 00:18:24.306 slat (usec): min=3, max=11761, avg=98.01, stdev=482.61 00:18:24.306 clat (usec): min=3293, max=32674, avg=13482.55, stdev=3748.98 00:18:24.306 lat (usec): min=3966, max=39876, avg=13580.56, stdev=3778.83 00:18:24.306 clat percentiles (usec): 00:18:24.306 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[11076], 20.00th=[11863], 00:18:24.306 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:18:24.306 | 70.00th=[13042], 80.00th=[14222], 90.00th=[16909], 95.00th=[22676], 00:18:24.306 | 99.00th=[29230], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:18:24.306 | 99.99th=[32637] 00:18:24.306 bw ( KiB/s): min=18256, max=20552, per=29.01%, avg=19404.00, stdev=1623.52, samples=2 00:18:24.306 iops : min= 4564, max= 5138, avg=4851.00, stdev=405.88, samples=2 00:18:24.306 lat (msec) : 4=0.05%, 10=3.46%, 20=91.45%, 50=5.04% 00:18:24.306 cpu : usr=4.79%, sys=9.67%, ctx=593, majf=0, minf=11 00:18:24.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:24.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
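The [global]/[jobN] sections printed just before "Starting 4 threads" are generated by the fio-wrapper call a few lines back (-p nvmf -i 4096 -d 128 -t randwrite -r 1 -v). For reference, a roughly equivalent standalone fio invocation against one of the namespaces would look like the sketch below; it is illustrative only and not the wrapper's exact command line:

  fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=4096 --iodepth=128 \
      --ioengine=libaio --direct=1 --time_based --runtime=1 --numjobs=1 \
      --verify=crc32c-intel --do_verify=1 --verify_backlog=512 --verify_dump=1

The -i/-d/-t/-r wrapper flags map onto bs, iodepth, rw and runtime respectively, and -v enables the crc32c-intel verify pass seen in the job file.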
00:18:24.306 issued rwts: total=4608,4978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.306 job3: (groupid=0, jobs=1): err= 0: pid=3578316: Fri Jul 12 13:26:21 2024 00:18:24.306 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:18:24.306 slat (usec): min=2, max=8679, avg=103.11, stdev=520.47 00:18:24.306 clat (usec): min=9956, max=25265, avg=13462.64, stdev=1629.03 00:18:24.306 lat (usec): min=10129, max=25293, avg=13565.75, stdev=1608.97 00:18:24.306 clat percentiles (usec): 00:18:24.306 | 1.00th=[10552], 5.00th=[11207], 10.00th=[11863], 20.00th=[12518], 00:18:24.306 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:18:24.306 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14746], 95.00th=[16450], 00:18:24.306 | 99.00th=[20579], 99.50th=[22152], 99.90th=[24773], 99.95th=[24773], 00:18:24.306 | 99.99th=[25297] 00:18:24.306 write: IOPS=4950, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1004msec); 0 zone resets 00:18:24.306 slat (usec): min=3, max=7767, avg=97.50, stdev=462.69 00:18:24.306 clat (usec): min=489, max=24641, avg=13015.30, stdev=2731.19 00:18:24.306 lat (usec): min=3450, max=24645, avg=13112.80, stdev=2715.41 00:18:24.306 clat percentiles (usec): 00:18:24.306 | 1.00th=[ 6718], 5.00th=[10159], 10.00th=[10683], 20.00th=[12125], 00:18:24.306 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:18:24.306 | 70.00th=[12911], 80.00th=[13304], 90.00th=[14615], 95.00th=[20317], 00:18:24.306 | 99.00th=[22938], 99.50th=[22938], 99.90th=[24249], 99.95th=[24249], 00:18:24.306 | 99.99th=[24511] 00:18:24.306 bw ( KiB/s): min=18984, max=19752, per=28.95%, avg=19368.00, stdev=543.06, samples=2 00:18:24.306 iops : min= 4746, max= 4938, avg=4842.00, stdev=135.76, samples=2 00:18:24.306 lat (usec) : 500=0.01% 00:18:24.306 lat (msec) : 4=0.33%, 10=1.52%, 20=94.84%, 50=3.29% 00:18:24.306 cpu : usr=5.68%, sys=7.88%, ctx=513, majf=0, minf=21 00:18:24.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:24.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.306 issued rwts: total=4608,4970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.306 00:18:24.306 Run status group 0 (all jobs): 00:18:24.306 READ: bw=59.9MiB/s (62.8MB/s), 9.90MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=60.5MiB (63.4MB), run=1004-1010msec 00:18:24.306 WRITE: bw=65.3MiB/s (68.5MB/s), 11.0MiB/s-19.4MiB/s (11.5MB/s-20.3MB/s), io=66.0MiB (69.2MB), run=1004-1010msec 00:18:24.306 00:18:24.306 Disk stats (read/write): 00:18:24.306 nvme0n1: ios=3283/3584, merge=0/0, ticks=48936/56752, in_queue=105688, util=85.77% 00:18:24.306 nvme0n2: ios=2098/2375, merge=0/0, ticks=21777/30336, in_queue=52113, util=91.37% 00:18:24.306 nvme0n3: ios=3969/4096, merge=0/0, ticks=17116/17090, in_queue=34206, util=95.42% 00:18:24.306 nvme0n4: ios=3996/4096, merge=0/0, ticks=14479/13200, in_queue=27679, util=96.12% 00:18:24.306 13:26:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:24.306 13:26:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3578454 00:18:24.306 13:26:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:24.306 13:26:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:24.306 
[global] 00:18:24.306 thread=1 00:18:24.306 invalidate=1 00:18:24.306 rw=read 00:18:24.306 time_based=1 00:18:24.306 runtime=10 00:18:24.306 ioengine=libaio 00:18:24.306 direct=1 00:18:24.306 bs=4096 00:18:24.306 iodepth=1 00:18:24.306 norandommap=1 00:18:24.306 numjobs=1 00:18:24.306 00:18:24.306 [job0] 00:18:24.306 filename=/dev/nvme0n1 00:18:24.306 [job1] 00:18:24.306 filename=/dev/nvme0n2 00:18:24.306 [job2] 00:18:24.306 filename=/dev/nvme0n3 00:18:24.306 [job3] 00:18:24.306 filename=/dev/nvme0n4 00:18:24.306 Could not set queue depth (nvme0n1) 00:18:24.306 Could not set queue depth (nvme0n2) 00:18:24.306 Could not set queue depth (nvme0n3) 00:18:24.306 Could not set queue depth (nvme0n4) 00:18:24.306 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.306 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.306 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.306 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.306 fio-3.35 00:18:24.306 Starting 4 threads 00:18:27.581 13:26:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:27.581 13:26:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:27.581 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=26079232, buflen=4096 00:18:27.581 fio: pid=3578545, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:27.581 13:26:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:27.581 13:26:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:27.581 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27725824, buflen=4096 00:18:27.581 fio: pid=3578544, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:27.838 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=8278016, buflen=4096 00:18:27.838 fio: pid=3578542, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:27.838 13:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:27.838 13:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:28.104 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6766592, buflen=4096 00:18:28.104 fio: pid=3578543, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:28.104 13:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.104 13:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:28.104 00:18:28.104 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3578542: Fri Jul 12 13:26:25 2024 00:18:28.104 read: IOPS=588, BW=2353KiB/s (2409kB/s)(8084KiB/3436msec) 00:18:28.104 slat (usec): min=5, max=33647, avg=36.30, 
stdev=800.33 00:18:28.104 clat (usec): min=248, max=41471, avg=1649.95, stdev=7036.32 00:18:28.104 lat (usec): min=254, max=41503, avg=1686.26, stdev=7079.11 00:18:28.104 clat percentiles (usec): 00:18:28.104 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 293], 00:18:28.104 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 379], 00:18:28.104 | 70.00th=[ 465], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 685], 00:18:28.104 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:18:28.104 | 99.99th=[41681] 00:18:28.104 bw ( KiB/s): min= 104, max= 6952, per=6.93%, avg=1258.67, stdev=2789.20, samples=6 00:18:28.104 iops : min= 26, max= 1738, avg=314.67, stdev=697.30, samples=6 00:18:28.104 lat (usec) : 250=0.15%, 500=74.53%, 750=21.17%, 1000=0.89% 00:18:28.104 lat (msec) : 2=0.05%, 4=0.05%, 50=3.12% 00:18:28.104 cpu : usr=0.20%, sys=0.93%, ctx=2028, majf=0, minf=1 00:18:28.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.104 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.104 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:28.104 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3578543: Fri Jul 12 13:26:25 2024 00:18:28.104 read: IOPS=446, BW=1783KiB/s (1826kB/s)(6608KiB/3706msec) 00:18:28.104 slat (usec): min=5, max=14412, avg=33.73, stdev=470.55 00:18:28.104 clat (usec): min=277, max=44945, avg=2192.22, stdev=8186.77 00:18:28.104 lat (usec): min=286, max=48692, avg=2220.60, stdev=8232.27 00:18:28.104 clat percentiles (usec): 00:18:28.104 | 1.00th=[ 306], 5.00th=[ 355], 10.00th=[ 371], 20.00th=[ 400], 00:18:28.104 | 30.00th=[ 429], 40.00th=[ 445], 50.00th=[ 457], 60.00th=[ 478], 00:18:28.104 | 70.00th=[ 494], 80.00th=[ 523], 90.00th=[ 586], 95.00th=[ 725], 00:18:28.104 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[44827], 00:18:28.104 | 99.99th=[44827] 00:18:28.104 bw ( KiB/s): min= 104, max= 7176, per=10.33%, avg=1874.71, stdev=2568.49, samples=7 00:18:28.104 iops : min= 26, max= 1794, avg=468.57, stdev=642.20, samples=7 00:18:28.104 lat (usec) : 500=71.93%, 750=23.23%, 1000=0.36% 00:18:28.104 lat (msec) : 2=0.06%, 10=0.06%, 20=0.06%, 50=4.23% 00:18:28.104 cpu : usr=0.46%, sys=0.73%, ctx=1658, majf=0, minf=1 00:18:28.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.104 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.104 issued rwts: total=1653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:28.104 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3578544: Fri Jul 12 13:26:25 2024 00:18:28.104 read: IOPS=2129, BW=8517KiB/s (8722kB/s)(26.4MiB/3179msec) 00:18:28.104 slat (nsec): min=4566, max=62884, avg=12207.95, stdev=8427.44 00:18:28.104 clat (usec): min=271, max=41242, avg=450.88, stdev=1076.65 00:18:28.104 lat (usec): min=277, max=41259, avg=463.08, stdev=1076.82 00:18:28.104 clat percentiles (usec): 00:18:28.104 | 1.00th=[ 302], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 388], 00:18:28.104 | 30.00th=[ 396], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 420], 00:18:28.104 | 70.00th=[ 
433], 80.00th=[ 453], 90.00th=[ 506], 95.00th=[ 545], 00:18:28.104 | 99.00th=[ 652], 99.50th=[ 701], 99.90th=[ 1074], 99.95th=[41157], 00:18:28.104 | 99.99th=[41157] 00:18:28.104 bw ( KiB/s): min= 5104, max= 9664, per=47.31%, avg=8584.00, stdev=1731.98, samples=6 00:18:28.104 iops : min= 1276, max= 2416, avg=2146.00, stdev=433.00, samples=6 00:18:28.104 lat (usec) : 500=88.88%, 750=10.90%, 1000=0.10% 00:18:28.104 lat (msec) : 2=0.01%, 4=0.01%, 50=0.07% 00:18:28.104 cpu : usr=1.42%, sys=3.78%, ctx=6770, majf=0, minf=1 00:18:28.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.104 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.104 issued rwts: total=6770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:28.104 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3578545: Fri Jul 12 13:26:25 2024 00:18:28.104 read: IOPS=2182, BW=8728KiB/s (8937kB/s)(24.9MiB/2918msec) 00:18:28.104 slat (nsec): min=5403, max=74064, avg=12398.89, stdev=6766.50 00:18:28.104 clat (usec): min=292, max=41191, avg=439.06, stdev=1025.43 00:18:28.104 lat (usec): min=300, max=41197, avg=451.46, stdev=1025.53 00:18:28.104 clat percentiles (usec): 00:18:28.104 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:18:28.104 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 408], 00:18:28.104 | 70.00th=[ 453], 80.00th=[ 498], 90.00th=[ 562], 95.00th=[ 611], 00:18:28.104 | 99.00th=[ 725], 99.50th=[ 791], 99.90th=[ 1254], 99.95th=[40633], 00:18:28.104 | 99.99th=[41157] 00:18:28.104 bw ( KiB/s): min= 6072, max=11584, per=50.25%, avg=9116.80, stdev=1998.26, samples=5 00:18:28.104 iops : min= 1518, max= 2896, avg=2279.20, stdev=499.57, samples=5 00:18:28.104 lat (usec) : 500=80.18%, 750=19.03%, 1000=0.64% 00:18:28.104 lat (msec) : 2=0.03%, 10=0.03%, 50=0.06% 00:18:28.104 cpu : usr=1.41%, sys=4.59%, ctx=6368, majf=0, minf=1 00:18:28.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.104 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.104 issued rwts: total=6368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:28.104 00:18:28.104 Run status group 0 (all jobs): 00:18:28.104 READ: bw=17.7MiB/s (18.6MB/s), 1783KiB/s-8728KiB/s (1826kB/s-8937kB/s), io=65.7MiB (68.8MB), run=2918-3706msec 00:18:28.104 00:18:28.104 Disk stats (read/write): 00:18:28.104 nvme0n1: ios=1821/0, merge=0/0, ticks=3261/0, in_queue=3261, util=94.48% 00:18:28.104 nvme0n2: ios=1650/0, merge=0/0, ticks=3534/0, in_queue=3534, util=95.93% 00:18:28.104 nvme0n3: ios=6662/0, merge=0/0, ticks=2931/0, in_queue=2931, util=96.79% 00:18:28.104 nvme0n4: ios=6296/0, merge=0/0, ticks=2681/0, in_queue=2681, util=96.75% 00:18:28.366 13:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.366 13:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:28.623 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.623 
13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:28.880 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:28.880 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:29.138 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:29.138 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:29.395 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:29.395 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3578454 00:18:29.395 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:29.395 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:29.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.683 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:29.683 13:26:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:29.683 13:26:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:29.683 13:26:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.683 13:26:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:29.683 13:26:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.683 13:26:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:29.683 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:29.684 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:29.684 nvmf hotplug test: fio failed as expected 00:18:29.684 13:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.940 rmmod nvme_tcp 00:18:29.940 rmmod nvme_fabrics 00:18:29.940 rmmod nvme_keyring 
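The hotplug exercise that produced the err=121 (Remote I/O error) results above condenses to roughly the following sequence; paths are shortened to scripts/ here, whereas the trace shows the full workspace paths:

  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s read job on nvme0n1..n4
  fio_pid=$!
  sleep 3
  scripts/rpc.py bdev_raid_delete concat0                    # remove backing bdevs while I/O is in flight
  scripts/rpc.py bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      scripts/rpc.py bdev_malloc_delete "$m"
  done
  wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'

fio terminating with Remote I/O error is the intended outcome, which is why the test reports success ("fio failed as expected") despite the non-zero fio status.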
00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3576535 ']' 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3576535 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3576535 ']' 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3576535 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3576535 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3576535' 00:18:29.940 killing process with pid 3576535 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3576535 00:18:29.940 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3576535 00:18:30.199 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.199 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:30.199 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:30.199 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.199 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:30.199 13:26:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.199 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.199 13:26:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.733 13:26:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:32.733 00:18:32.733 real 0m23.291s 00:18:32.733 user 1m18.858s 00:18:32.733 sys 0m7.812s 00:18:32.733 13:26:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:32.733 13:26:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.733 ************************************ 00:18:32.733 END TEST nvmf_fio_target 00:18:32.733 ************************************ 00:18:32.733 13:26:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:32.733 13:26:29 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:32.733 13:26:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:32.733 13:26:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.733 13:26:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:32.733 ************************************ 00:18:32.733 START TEST nvmf_bdevio 00:18:32.733 ************************************ 00:18:32.733 13:26:29 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:32.733 * Looking for test storage... 00:18:32.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.733 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.734 13:26:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:34.634 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:34.634 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:34.634 Found net devices under 0000:09:00.0: cvl_0_0 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:34.634 
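The discovery loop above walks the supported vendor/device ID table (e810/x722/mlx) and maps each matching PCI function to its kernel net device through /sys. Outside the harness, the same two E810 ports found here could be listed with, for example, the commands below; these are illustrative only, since the test itself relies on its pci_bus_cache arrays rather than lspci:

  lspci -d 8086:159b                              # the two 0000:09:00.x functions reported above
  ls /sys/bus/pci/devices/0000:09:00.0/net/       # -> cvl_0_0, per the discovery output
  ls /sys/bus/pci/devices/0000:09:00.1/net/       # -> cvl_0_1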
Found net devices under 0000:09:00.1: cvl_0_1 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:34.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:18:34.634 00:18:34.634 --- 10.0.0.2 ping statistics --- 00:18:34.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.634 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:34.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:18:34.634 00:18:34.634 --- 10.0.0.1 ping statistics --- 00:18:34.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.634 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3581203 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3581203 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3581203 ']' 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.634 13:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.634 [2024-07-12 13:26:31.931739] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:18:34.634 [2024-07-12 13:26:31.931814] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.635 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.635 [2024-07-12 13:26:31.972125] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:34.635 [2024-07-12 13:26:31.999867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.635 [2024-07-12 13:26:32.085339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
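Summarizing the nvmf_tcp_init steps traced above: the first port (cvl_0_0) is moved into a private network namespace to act as the target side, the second (cvl_0_1) stays in the root namespace as the initiator, and a one-packet ping in each direction confirms the 10.0.0.0/24 link before the target application is started inside that namespace. Condensed, using the same commands as the trace with only the repo path of nvmf_tgt abbreviated:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &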
00:18:34.635 [2024-07-12 13:26:32.085404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.635 [2024-07-12 13:26:32.085419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.635 [2024-07-12 13:26:32.085437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.635 [2024-07-12 13:26:32.085463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.635 [2024-07-12 13:26:32.085559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:34.635 [2024-07-12 13:26:32.085690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:34.635 [2024-07-12 13:26:32.085718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:34.635 [2024-07-12 13:26:32.085721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.892 [2024-07-12 13:26:32.236063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.892 Malloc0 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.892 13:26:32 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:34.892 [2024-07-12 13:26:32.288241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.892 13:26:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:34.893 { 00:18:34.893 "params": { 00:18:34.893 "name": "Nvme$subsystem", 00:18:34.893 "trtype": "$TEST_TRANSPORT", 00:18:34.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.893 "adrfam": "ipv4", 00:18:34.893 "trsvcid": "$NVMF_PORT", 00:18:34.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.893 "hdgst": ${hdgst:-false}, 00:18:34.893 "ddgst": ${ddgst:-false} 00:18:34.893 }, 00:18:34.893 "method": "bdev_nvme_attach_controller" 00:18:34.893 } 00:18:34.893 EOF 00:18:34.893 )") 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:34.893 13:26:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:34.893 "params": { 00:18:34.893 "name": "Nvme1", 00:18:34.893 "trtype": "tcp", 00:18:34.893 "traddr": "10.0.0.2", 00:18:34.893 "adrfam": "ipv4", 00:18:34.893 "trsvcid": "4420", 00:18:34.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.893 "hdgst": false, 00:18:34.893 "ddgst": false 00:18:34.893 }, 00:18:34.893 "method": "bdev_nvme_attach_controller" 00:18:34.893 }' 00:18:34.893 [2024-07-12 13:26:32.334895] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:18:34.893 [2024-07-12 13:26:32.334972] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581294 ] 00:18:34.893 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.150 [2024-07-12 13:26:32.368426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
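For readability, the config entry that the printf/jq pipeline above emits on /dev/fd/62 pretty-prints to the object below; gen_nvmf_target_json folds it into the larger JSON document handed to bdevio --json (the wrapper itself is not expanded in this trace):

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}

In effect, bdevio attaches over TCP at 10.0.0.2:4420 to the subsystem created by the rpc_cmd calls above and runs its block-level suite against the resulting Nvme1n1 bdev.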
00:18:35.150 [2024-07-12 13:26:32.397643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.150 [2024-07-12 13:26:32.488812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.150 [2024-07-12 13:26:32.488860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.150 [2024-07-12 13:26:32.488863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.408 I/O targets: 00:18:35.408 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:35.408 00:18:35.408 00:18:35.408 CUnit - A unit testing framework for C - Version 2.1-3 00:18:35.408 http://cunit.sourceforge.net/ 00:18:35.408 00:18:35.408 00:18:35.408 Suite: bdevio tests on: Nvme1n1 00:18:35.408 Test: blockdev write read block ...passed 00:18:35.408 Test: blockdev write zeroes read block ...passed 00:18:35.408 Test: blockdev write zeroes read no split ...passed 00:18:35.408 Test: blockdev write zeroes read split ...passed 00:18:35.408 Test: blockdev write zeroes read split partial ...passed 00:18:35.408 Test: blockdev reset ...[2024-07-12 13:26:32.875209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.408 [2024-07-12 13:26:32.875327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x783940 (9): Bad file descriptor 00:18:35.666 [2024-07-12 13:26:32.970380] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:35.666 passed 00:18:35.666 Test: blockdev write read 8 blocks ...passed 00:18:35.666 Test: blockdev write read size > 128k ...passed 00:18:35.666 Test: blockdev write read invalid size ...passed 00:18:35.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:35.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:35.666 Test: blockdev write read max offset ...passed 00:18:35.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:35.923 Test: blockdev writev readv 8 blocks ...passed 00:18:35.923 Test: blockdev writev readv 30 x 1block ...passed 00:18:35.923 Test: blockdev writev readv block ...passed 00:18:35.923 Test: blockdev writev readv size > 128k ...passed 00:18:35.923 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:35.923 Test: blockdev comparev and writev ...[2024-07-12 13:26:33.185627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.923 [2024-07-12 13:26:33.185662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.923 [2024-07-12 13:26:33.185687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.923 [2024-07-12 13:26:33.185704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:35.923 [2024-07-12 13:26:33.186115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.923 [2024-07-12 13:26:33.186147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:35.923 [2024-07-12 13:26:33.186170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:18:35.923 [2024-07-12 13:26:33.186185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:35.923 [2024-07-12 13:26:33.186584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.924 [2024-07-12 13:26:33.186608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:35.924 [2024-07-12 13:26:33.186630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.924 [2024-07-12 13:26:33.186646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:35.924 [2024-07-12 13:26:33.187036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.924 [2024-07-12 13:26:33.187059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:35.924 [2024-07-12 13:26:33.187080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.924 [2024-07-12 13:26:33.187095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:35.924 passed 00:18:35.924 Test: blockdev nvme passthru rw ...passed 00:18:35.924 Test: blockdev nvme passthru vendor specific ...[2024-07-12 13:26:33.269683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:35.924 [2024-07-12 13:26:33.269711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:35.924 [2024-07-12 13:26:33.269893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:35.924 [2024-07-12 13:26:33.269916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:35.924 [2024-07-12 13:26:33.270091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:35.924 [2024-07-12 13:26:33.270114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:35.924 [2024-07-12 13:26:33.270292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:35.924 [2024-07-12 13:26:33.270323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:35.924 passed 00:18:35.924 Test: blockdev nvme admin passthru ...passed 00:18:35.924 Test: blockdev copy ...passed 00:18:35.924 00:18:35.924 Run Summary: Type Total Ran Passed Failed Inactive 00:18:35.924 suites 1 1 n/a 0 0 00:18:35.924 tests 23 23 23 0 0 00:18:35.924 asserts 152 152 152 0 n/a 00:18:35.924 00:18:35.924 Elapsed time = 1.230 seconds 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.181 rmmod nvme_tcp 00:18:36.181 rmmod nvme_fabrics 00:18:36.181 rmmod nvme_keyring 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3581203 ']' 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3581203 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 3581203 ']' 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3581203 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3581203 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3581203' 00:18:36.181 killing process with pid 3581203 00:18:36.181 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3581203 00:18:36.182 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3581203 00:18:36.439 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.439 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.439 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.439 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.439 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.439 13:26:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.439 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.439 13:26:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.970 13:26:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:38.970 00:18:38.970 real 0m6.227s 00:18:38.970 user 0m9.808s 00:18:38.970 sys 0m2.120s 00:18:38.970 13:26:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 
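The cleanup traced here (trap removal, nvmftestfini, nvmfcleanup, killprocess) reduces to a handful of plain commands once the nvmf_delete_subsystem RPC has gone through; the interface, namespace and pid values are from this run, and the final netns removal is an assumed reading of _remove_spdk_ns, which the trace does not expand:

kill 3581203 && wait 3581203        # killprocess $nvmfpid
modprobe -v -r nvme-tcp             # unloads nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk     # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1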
00:18:38.970 13:26:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:38.970 ************************************ 00:18:38.970 END TEST nvmf_bdevio 00:18:38.970 ************************************ 00:18:38.970 13:26:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:38.970 13:26:35 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:38.970 13:26:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:38.970 13:26:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.970 13:26:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:38.970 ************************************ 00:18:38.970 START TEST nvmf_auth_target 00:18:38.970 ************************************ 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:38.970 * Looking for test storage... 00:18:38.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:38.970 13:26:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.970 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.971 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.971 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.971 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.971 13:26:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.971 13:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.971 13:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.971 13:26:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.971 13:26:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.971 13:26:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.971 13:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
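The digests/dhgroups arrays declared by target/auth.sh at the start of this test define the whole sweep; together with the four generated keys, the outer structure of the run (the 'for digest / for dhgroup / for keyid' loops that appear further down in the trace) is the nested walk sketched here, one connect_authenticate round per combination (key paths are the ones generated later in this run):

digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
keys=(/tmp/spdk.key-null.BC3 /tmp/spdk.key-sha256.BIz /tmp/spdk.key-sha384.AdW /tmp/spdk.key-sha512.2Pv)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      :   # one connect_authenticate "$digest" "$dhgroup" "$keyid" round in the real script
    done
  done
done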
00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:40.874 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:40.874 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:40.874 Found net devices under 0000:09:00.0: cvl_0_0 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:40.874 Found net devices under 0000:09:00.1: cvl_0_1 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
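The discovery pass above filters PCI functions by vendor/device ID (0x8086:0x159b, i.e. E810) and then reads the interface name straight out of sysfs; stripped of the link-state checks ([[ up == up ]] in the trace), the lookup for the two ports found on this host is just:

for pci in 0000:09:00.0 0000:09:00.1; do
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -d "$path" ] || continue                        # skip if the glob did not expand
    echo "Found net devices under $pci: ${path##*/}"  # cvl_0_0 / cvl_0_1 on this host
  done
done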
00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.874 13:26:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:18:40.874 00:18:40.874 --- 10.0.0.2 ping statistics --- 00:18:40.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.874 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:18:40.874 00:18:40.874 --- 10.0.0.1 ping statistics --- 00:18:40.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.874 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3583418 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3583418 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@829 -- # '[' -z 3583418 ']' 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.874 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3583441 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e0fd38e9bc43e829fdc6cf6ae103d33461bc46ecc76bc604 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BC3 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e0fd38e9bc43e829fdc6cf6ae103d33461bc46ecc76bc604 0 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e0fd38e9bc43e829fdc6cf6ae103d33461bc46ecc76bc604 0 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e0fd38e9bc43e829fdc6cf6ae103d33461bc46ecc76bc604 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BC3 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BC3 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.BC3 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=af95db6569e8fbdf546e6e7c07dcb8155683ba06fbbc455cb46f8078ba09ce6d 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0V5 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key af95db6569e8fbdf546e6e7c07dcb8155683ba06fbbc455cb46f8078ba09ce6d 3 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 af95db6569e8fbdf546e6e7c07dcb8155683ba06fbbc455cb46f8078ba09ce6d 3 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=af95db6569e8fbdf546e6e7c07dcb8155683ba06fbbc455cb46f8078ba09ce6d 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0V5 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0V5 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.0V5 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.133 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3b080bc589f38000334862dc712f1670 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BIz 00:18:41.134 13:26:38 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3b080bc589f38000334862dc712f1670 1 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3b080bc589f38000334862dc712f1670 1 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3b080bc589f38000334862dc712f1670 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BIz 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BIz 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.BIz 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a27b3c08049a43022b0386a9c13a3c24644cdf91fd694896 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.N6b 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a27b3c08049a43022b0386a9c13a3c24644cdf91fd694896 2 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a27b3c08049a43022b0386a9c13a3c24644cdf91fd694896 2 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a27b3c08049a43022b0386a9c13a3c24644cdf91fd694896 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:41.134 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.N6b 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.N6b 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.N6b 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.394 13:26:38 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=684bd4fe67502bd5aae43b38746f9f2c8efab8cbdada5efe 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.AdW 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 684bd4fe67502bd5aae43b38746f9f2c8efab8cbdada5efe 2 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 684bd4fe67502bd5aae43b38746f9f2c8efab8cbdada5efe 2 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=684bd4fe67502bd5aae43b38746f9f2c8efab8cbdada5efe 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.AdW 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.AdW 00:18:41.394 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.AdW 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ee8592c1c7265304c15612c9c8ffe7ad 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ezu 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ee8592c1c7265304c15612c9c8ffe7ad 1 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ee8592c1c7265304c15612c9c8ffe7ad 1 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ee8592c1c7265304c15612c9c8ffe7ad 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.395 
13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ezu 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ezu 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Ezu 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7c4b6c9410ccfc2ae58e2b016a4243a62fe775f246a61b286bafdce181f14f97 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2Pv 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7c4b6c9410ccfc2ae58e2b016a4243a62fe775f246a61b286bafdce181f14f97 3 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7c4b6c9410ccfc2ae58e2b016a4243a62fe775f246a61b286bafdce181f14f97 3 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7c4b6c9410ccfc2ae58e2b016a4243a62fe775f246a61b286bafdce181f14f97 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2Pv 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2Pv 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.2Pv 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3583418 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3583418 ']' 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
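Each gen_dhchap_key call traced above draws random bytes with xxd, has an inline python helper in nvmf/common.sh wrap them into a DHHC-1 secret, and stores the result mode 0600 under /tmp. A reduced sketch of one 'null 48' key, assuming common.sh is sourced as it is at the top of auth.sh (the redirection into the temp file is assumed; the trace only shows the format_dhchap_key call and the chmod):

source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, e.g. e0fd38e9...76bc604 in this run
file=$(mktemp -t spdk.key-null.XXX)      # e.g. /tmp/spdk.key-null.BC3
format_dhchap_key "$key" 0 > "$file"     # DHHC-1 wrapping done by the helper, not reproduced here
chmod 0600 "$file"
echo "$file"

The files are then registered on both sides, rpc_cmd keyring_file_add_key against /var/tmp/spdk.sock for the target and hostrpc (scripts/rpc.py -s /var/tmp/host.sock) for the host, which is what the next stretch of the trace does before the first bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key.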
00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.395 13:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3583441 /var/tmp/host.sock 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3583441 ']' 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:41.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.652 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BC3 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.BC3 00:18:41.909 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.BC3 00:18:42.166 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.0V5 ]] 00:18:42.166 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0V5 00:18:42.166 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.166 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.166 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.166 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0V5 00:18:42.166 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0V5 00:18:42.422 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:42.422 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BIz 00:18:42.422 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.422 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.422 13:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.422 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BIz 00:18:42.422 13:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BIz 00:18:42.683 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.N6b ]] 00:18:42.683 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N6b 00:18:42.683 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.683 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.683 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.683 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N6b 00:18:42.683 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N6b 00:18:42.941 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:42.941 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AdW 00:18:42.941 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.941 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.941 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.941 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AdW 00:18:42.941 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AdW 00:18:43.198 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Ezu ]] 00:18:43.198 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ezu 00:18:43.198 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.198 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.198 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.198 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ezu 00:18:43.198 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Ezu 00:18:43.455 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:43.455 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2Pv 00:18:43.455 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.455 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.455 13:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.455 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2Pv 00:18:43.455 13:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2Pv 00:18:43.711 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:43.711 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:43.711 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.711 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.711 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.711 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.969 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.227 00:18:44.227 13:26:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.227 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.227 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.485 { 00:18:44.485 "cntlid": 1, 00:18:44.485 "qid": 0, 00:18:44.485 "state": "enabled", 00:18:44.485 "thread": "nvmf_tgt_poll_group_000", 00:18:44.485 "listen_address": { 00:18:44.485 "trtype": "TCP", 00:18:44.485 "adrfam": "IPv4", 00:18:44.485 "traddr": "10.0.0.2", 00:18:44.485 "trsvcid": "4420" 00:18:44.485 }, 00:18:44.485 "peer_address": { 00:18:44.485 "trtype": "TCP", 00:18:44.485 "adrfam": "IPv4", 00:18:44.485 "traddr": "10.0.0.1", 00:18:44.485 "trsvcid": "46562" 00:18:44.485 }, 00:18:44.485 "auth": { 00:18:44.485 "state": "completed", 00:18:44.485 "digest": "sha256", 00:18:44.485 "dhgroup": "null" 00:18:44.485 } 00:18:44.485 } 00:18:44.485 ]' 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.485 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.742 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:44.742 13:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.742 13:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.742 13:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.742 13:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.002 13:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.976 13:26:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.976 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.233 00:18:46.490 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.490 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.490 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.490 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.490 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.490 13:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.490 13:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.747 13:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.747 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.747 { 00:18:46.747 "cntlid": 3, 00:18:46.747 "qid": 0, 00:18:46.747 
"state": "enabled", 00:18:46.747 "thread": "nvmf_tgt_poll_group_000", 00:18:46.747 "listen_address": { 00:18:46.747 "trtype": "TCP", 00:18:46.747 "adrfam": "IPv4", 00:18:46.747 "traddr": "10.0.0.2", 00:18:46.747 "trsvcid": "4420" 00:18:46.747 }, 00:18:46.747 "peer_address": { 00:18:46.747 "trtype": "TCP", 00:18:46.747 "adrfam": "IPv4", 00:18:46.747 "traddr": "10.0.0.1", 00:18:46.747 "trsvcid": "46592" 00:18:46.747 }, 00:18:46.747 "auth": { 00:18:46.747 "state": "completed", 00:18:46.747 "digest": "sha256", 00:18:46.747 "dhgroup": "null" 00:18:46.747 } 00:18:46.747 } 00:18:46.747 ]' 00:18:46.747 13:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.747 13:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.747 13:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.747 13:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:46.747 13:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.747 13:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.747 13:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.747 13:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.004 13:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:18:47.935 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.935 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:47.935 13:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.935 13:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.935 13:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.935 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.935 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.935 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:48.192 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:48.192 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.192 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.192 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:48.192 13:26:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.193 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.193 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.193 13:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.193 13:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.193 13:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.193 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.193 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.449 00:18:48.449 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.449 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.449 13:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.706 { 00:18:48.706 "cntlid": 5, 00:18:48.706 "qid": 0, 00:18:48.706 "state": "enabled", 00:18:48.706 "thread": "nvmf_tgt_poll_group_000", 00:18:48.706 "listen_address": { 00:18:48.706 "trtype": "TCP", 00:18:48.706 "adrfam": "IPv4", 00:18:48.706 "traddr": "10.0.0.2", 00:18:48.706 "trsvcid": "4420" 00:18:48.706 }, 00:18:48.706 "peer_address": { 00:18:48.706 "trtype": "TCP", 00:18:48.706 "adrfam": "IPv4", 00:18:48.706 "traddr": "10.0.0.1", 00:18:48.706 "trsvcid": "46618" 00:18:48.706 }, 00:18:48.706 "auth": { 00:18:48.706 "state": "completed", 00:18:48.706 "digest": "sha256", 00:18:48.706 "dhgroup": "null" 00:18:48.706 } 00:18:48.706 } 00:18:48.706 ]' 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.706 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.964 13:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:18:49.895 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.895 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.895 13:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.895 13:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.895 13:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.895 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.895 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.895 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.153 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.410 00:18:50.410 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.410 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.410 13:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.667 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.667 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.668 13:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.668 13:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.668 13:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.668 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.668 { 00:18:50.668 "cntlid": 7, 00:18:50.668 "qid": 0, 00:18:50.668 "state": "enabled", 00:18:50.668 "thread": "nvmf_tgt_poll_group_000", 00:18:50.668 "listen_address": { 00:18:50.668 "trtype": "TCP", 00:18:50.668 "adrfam": "IPv4", 00:18:50.668 "traddr": "10.0.0.2", 00:18:50.668 "trsvcid": "4420" 00:18:50.668 }, 00:18:50.668 "peer_address": { 00:18:50.668 "trtype": "TCP", 00:18:50.668 "adrfam": "IPv4", 00:18:50.668 "traddr": "10.0.0.1", 00:18:50.668 "trsvcid": "37402" 00:18:50.668 }, 00:18:50.668 "auth": { 00:18:50.668 "state": "completed", 00:18:50.668 "digest": "sha256", 00:18:50.668 "dhgroup": "null" 00:18:50.668 } 00:18:50.668 } 00:18:50.668 ]' 00:18:50.668 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.668 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.668 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.925 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:50.925 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.925 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.925 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.925 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.182 13:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.113 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.370 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.628 00:18:52.628 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.628 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.628 13:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.885 { 00:18:52.885 "cntlid": 9, 00:18:52.885 "qid": 0, 00:18:52.885 "state": "enabled", 00:18:52.885 "thread": "nvmf_tgt_poll_group_000", 00:18:52.885 "listen_address": { 00:18:52.885 "trtype": "TCP", 00:18:52.885 "adrfam": "IPv4", 00:18:52.885 "traddr": "10.0.0.2", 00:18:52.885 "trsvcid": "4420" 00:18:52.885 }, 00:18:52.885 "peer_address": { 00:18:52.885 "trtype": "TCP", 00:18:52.885 "adrfam": "IPv4", 00:18:52.885 "traddr": "10.0.0.1", 00:18:52.885 "trsvcid": "37444" 00:18:52.885 }, 00:18:52.885 "auth": { 00:18:52.885 "state": "completed", 00:18:52.885 "digest": "sha256", 00:18:52.885 "dhgroup": "ffdhe2048" 00:18:52.885 } 00:18:52.885 } 00:18:52.885 ]' 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.885 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.142 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.142 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.142 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.400 13:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:18:54.333 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.333 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:54.333 13:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.333 13:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.333 13:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.333 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.333 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.333 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.591 13:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.849 00:18:54.849 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.849 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.849 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.107 { 00:18:55.107 "cntlid": 11, 00:18:55.107 "qid": 0, 00:18:55.107 "state": "enabled", 00:18:55.107 "thread": "nvmf_tgt_poll_group_000", 00:18:55.107 "listen_address": { 00:18:55.107 "trtype": "TCP", 00:18:55.107 "adrfam": "IPv4", 00:18:55.107 "traddr": "10.0.0.2", 00:18:55.107 "trsvcid": "4420" 00:18:55.107 }, 00:18:55.107 "peer_address": { 00:18:55.107 "trtype": "TCP", 00:18:55.107 "adrfam": "IPv4", 00:18:55.107 "traddr": "10.0.0.1", 00:18:55.107 "trsvcid": "37472" 00:18:55.107 }, 00:18:55.107 "auth": { 00:18:55.107 "state": "completed", 00:18:55.107 "digest": "sha256", 00:18:55.107 "dhgroup": "ffdhe2048" 00:18:55.107 } 00:18:55.107 } 00:18:55.107 ]' 00:18:55.107 
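Each connect_authenticate pass in this loop follows the same pattern: the host-side bdev_nvme layer is pinned to one digest/dhgroup combination, the host NQN is allowed on the subsystem with a key/controller-key pair, a controller is attached through the host RPC socket, and the target's qpair listing is checked for a completed DH-HMAC-CHAP handshake before tearing down. A condensed sketch of one pass using only the RPCs visible in the trace; the $rpc/$hostrpc shorthands, shortened paths, and the expected-output comment are assumptions:

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"       # target-side RPC socket
hostrpc="scripts/rpc.py -s /var/tmp/host.sock"   # initiator-side RPC socket
nqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Pin the host to a single digest/dhgroup combination for this pass.
$hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Allow the host on the subsystem with key1/ckey1, then attach from the host side.
$rpc nvmf_subsystem_add_host "$nqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$nqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The target should report a completed handshake with the negotiated parameters
# (expected here: "completed sha256 ffdhe2048").
$rpc nvmf_subsystem_get_qpairs "$nqn" | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'

# Tear down before the next digest/dhgroup/key combination.
$hostrpc bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$nqn" "$hostnqn"

The interleaved nvme connect / nvme disconnect steps exercise the same handshake from the kernel initiator, passing the literal DHHC-1 secrets via --dhchap-secret and --dhchap-ctrl-secret instead of keyring names.
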
13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.107 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.364 13:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:18:56.292 13:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.292 13:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:56.292 13:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.292 13:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.292 13:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.292 13:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.292 13:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.292 13:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.548 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.549 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.113 00:18:57.113 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.113 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.113 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.113 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.113 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.113 13:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.113 13:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.370 { 00:18:57.370 "cntlid": 13, 00:18:57.370 "qid": 0, 00:18:57.370 "state": "enabled", 00:18:57.370 "thread": "nvmf_tgt_poll_group_000", 00:18:57.370 "listen_address": { 00:18:57.370 "trtype": "TCP", 00:18:57.370 "adrfam": "IPv4", 00:18:57.370 "traddr": "10.0.0.2", 00:18:57.370 "trsvcid": "4420" 00:18:57.370 }, 00:18:57.370 "peer_address": { 00:18:57.370 "trtype": "TCP", 00:18:57.370 "adrfam": "IPv4", 00:18:57.370 "traddr": "10.0.0.1", 00:18:57.370 "trsvcid": "37486" 00:18:57.370 }, 00:18:57.370 "auth": { 00:18:57.370 "state": "completed", 00:18:57.370 "digest": "sha256", 00:18:57.370 "dhgroup": "ffdhe2048" 00:18:57.370 } 00:18:57.370 } 00:18:57.370 ]' 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.370 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.627 13:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:18:58.561 13:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.561 13:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:58.561 13:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.561 13:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.561 13:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.561 13:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.561 13:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.561 13:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.855 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.113 00:18:59.113 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.113 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:59.113 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.370 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.370 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.370 13:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.370 13:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.370 13:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.370 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.370 { 00:18:59.370 "cntlid": 15, 00:18:59.370 "qid": 0, 00:18:59.370 "state": "enabled", 00:18:59.371 "thread": "nvmf_tgt_poll_group_000", 00:18:59.371 "listen_address": { 00:18:59.371 "trtype": "TCP", 00:18:59.371 "adrfam": "IPv4", 00:18:59.371 "traddr": "10.0.0.2", 00:18:59.371 "trsvcid": "4420" 00:18:59.371 }, 00:18:59.371 "peer_address": { 00:18:59.371 "trtype": "TCP", 00:18:59.371 "adrfam": "IPv4", 00:18:59.371 "traddr": "10.0.0.1", 00:18:59.371 "trsvcid": "42102" 00:18:59.371 }, 00:18:59.371 "auth": { 00:18:59.371 "state": "completed", 00:18:59.371 "digest": "sha256", 00:18:59.371 "dhgroup": "ffdhe2048" 00:18:59.371 } 00:18:59.371 } 00:18:59.371 ]' 00:18:59.371 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.371 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.371 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.371 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.371 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.371 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.371 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.371 13:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.628 13:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:19:00.561 13:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.561 13:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:00.561 13:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.561 13:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.561 13:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.561 13:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.561 13:26:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.561 13:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.561 13:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.818 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.075 00:19:01.075 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.075 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.075 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.332 { 00:19:01.332 "cntlid": 17, 00:19:01.332 "qid": 0, 00:19:01.332 "state": "enabled", 00:19:01.332 "thread": "nvmf_tgt_poll_group_000", 00:19:01.332 "listen_address": { 00:19:01.332 "trtype": "TCP", 00:19:01.332 "adrfam": "IPv4", 
00:19:01.332 "traddr": "10.0.0.2", 00:19:01.332 "trsvcid": "4420" 00:19:01.332 }, 00:19:01.332 "peer_address": { 00:19:01.332 "trtype": "TCP", 00:19:01.332 "adrfam": "IPv4", 00:19:01.332 "traddr": "10.0.0.1", 00:19:01.332 "trsvcid": "42124" 00:19:01.332 }, 00:19:01.332 "auth": { 00:19:01.332 "state": "completed", 00:19:01.332 "digest": "sha256", 00:19:01.332 "dhgroup": "ffdhe3072" 00:19:01.332 } 00:19:01.332 } 00:19:01.332 ]' 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.332 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.590 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.590 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.590 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.590 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.590 13:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.847 13:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:19:02.778 13:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.778 13:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.778 13:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.778 13:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.778 13:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.779 13:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.779 13:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.779 13:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.036 13:27:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.036 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.294 00:19:03.294 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.294 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.294 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.552 { 00:19:03.552 "cntlid": 19, 00:19:03.552 "qid": 0, 00:19:03.552 "state": "enabled", 00:19:03.552 "thread": "nvmf_tgt_poll_group_000", 00:19:03.552 "listen_address": { 00:19:03.552 "trtype": "TCP", 00:19:03.552 "adrfam": "IPv4", 00:19:03.552 "traddr": "10.0.0.2", 00:19:03.552 "trsvcid": "4420" 00:19:03.552 }, 00:19:03.552 "peer_address": { 00:19:03.552 "trtype": "TCP", 00:19:03.552 "adrfam": "IPv4", 00:19:03.552 "traddr": "10.0.0.1", 00:19:03.552 "trsvcid": "42156" 00:19:03.552 }, 00:19:03.552 "auth": { 00:19:03.552 "state": "completed", 00:19:03.552 "digest": "sha256", 00:19:03.552 "dhgroup": "ffdhe3072" 00:19:03.552 } 00:19:03.552 } 00:19:03.552 ]' 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.552 13:27:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.552 13:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.810 13:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:19:04.743 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.743 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:04.743 13:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.743 13:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.743 13:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.743 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.743 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.743 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.000 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.258 00:19:05.515 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.515 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.515 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.772 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.772 13:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.772 13:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.772 13:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.772 { 00:19:05.772 "cntlid": 21, 00:19:05.772 "qid": 0, 00:19:05.772 "state": "enabled", 00:19:05.772 "thread": "nvmf_tgt_poll_group_000", 00:19:05.772 "listen_address": { 00:19:05.772 "trtype": "TCP", 00:19:05.772 "adrfam": "IPv4", 00:19:05.772 "traddr": "10.0.0.2", 00:19:05.772 "trsvcid": "4420" 00:19:05.772 }, 00:19:05.772 "peer_address": { 00:19:05.772 "trtype": "TCP", 00:19:05.772 "adrfam": "IPv4", 00:19:05.772 "traddr": "10.0.0.1", 00:19:05.772 "trsvcid": "42188" 00:19:05.772 }, 00:19:05.772 "auth": { 00:19:05.772 "state": "completed", 00:19:05.772 "digest": "sha256", 00:19:05.772 "dhgroup": "ffdhe3072" 00:19:05.772 } 00:19:05.772 } 00:19:05.772 ]' 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.772 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.029 13:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:19:06.961 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
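[editorial sketch, not part of the captured log] The trace around this point repeats one DH-CHAP round-trip per key index: the host-side bdev_nvme options pin the digest and FFDHE group, the target subsystem is told which key (and optional controller key) the host must present, a controller is attached through /var/tmp/host.sock, and the resulting queue pair is checked for auth.state == "completed" before everything is torn down and re-checked through the kernel initiator. A minimal standalone sketch of that round-trip follows; the key names (key0/ckey0), NQNs, addresses, and RPC flags are taken from the log itself, the DHHC-1 secret strings are placeholders, and the earlier part of auth.sh that generates and registers those keys is assumed to have run already.

#!/usr/bin/env bash
# Illustrative DH-CHAP round-trip distilled from the auth.sh trace above.
# Assumes an SPDK target on its default RPC socket and a host application
# listening on /var/tmp/host.sock, with key0/ckey0 already registered.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Host side: restrict negotiation to one digest and one FFDHE group.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side: allow this host on the subsystem and require key0;
# ckey0 enables bidirectional (controller) authentication.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller over TCP, authenticating with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0

# Verify the negotiated parameters on the target's queue pair.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # completed
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # sha256
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # ffdhe3072

# Tear down, then repeat the same handshake through the kernel initiator.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret 'DHHC-1:00:<host key material>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key material>'
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The surrounding loop then advances keyid through key0..key3 and, once all four keys have passed for a group, repeats the same sequence with ffdhe4096 and ffdhe6144 while staying on sha256.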
00:19:06.961 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:06.961 13:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.961 13:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.961 13:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.961 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.961 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.961 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.219 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.477 00:19:07.477 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.477 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.477 13:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.734 { 00:19:07.734 "cntlid": 23, 00:19:07.734 "qid": 0, 00:19:07.734 "state": "enabled", 00:19:07.734 "thread": "nvmf_tgt_poll_group_000", 00:19:07.734 "listen_address": { 00:19:07.734 "trtype": "TCP", 00:19:07.734 "adrfam": "IPv4", 00:19:07.734 "traddr": "10.0.0.2", 00:19:07.734 "trsvcid": "4420" 00:19:07.734 }, 00:19:07.734 "peer_address": { 00:19:07.734 "trtype": "TCP", 00:19:07.734 "adrfam": "IPv4", 00:19:07.734 "traddr": "10.0.0.1", 00:19:07.734 "trsvcid": "42222" 00:19:07.734 }, 00:19:07.734 "auth": { 00:19:07.734 "state": "completed", 00:19:07.734 "digest": "sha256", 00:19:07.734 "dhgroup": "ffdhe3072" 00:19:07.734 } 00:19:07.734 } 00:19:07.734 ]' 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.734 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.992 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.992 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.992 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.249 13:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:19:09.181 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.181 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:09.181 13:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.181 13:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.181 13:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.182 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.182 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.182 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.182 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.439 13:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.697 00:19:09.697 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.697 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.697 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.954 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.954 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.954 13:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.954 13:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.954 13:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.954 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.954 { 00:19:09.954 "cntlid": 25, 00:19:09.954 "qid": 0, 00:19:09.954 "state": "enabled", 00:19:09.954 "thread": "nvmf_tgt_poll_group_000", 00:19:09.954 "listen_address": { 00:19:09.954 "trtype": "TCP", 00:19:09.954 "adrfam": "IPv4", 00:19:09.954 "traddr": "10.0.0.2", 00:19:09.954 "trsvcid": "4420" 00:19:09.954 }, 00:19:09.954 "peer_address": { 00:19:09.954 "trtype": "TCP", 00:19:09.954 "adrfam": "IPv4", 00:19:09.954 "traddr": "10.0.0.1", 00:19:09.954 "trsvcid": "48002" 00:19:09.954 }, 00:19:09.954 "auth": { 00:19:09.954 "state": "completed", 00:19:09.954 "digest": "sha256", 00:19:09.954 "dhgroup": "ffdhe4096" 00:19:09.954 } 00:19:09.954 } 00:19:09.954 ]' 00:19:09.954 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.954 13:27:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.954 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.212 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.212 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.212 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.212 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.212 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.470 13:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:19:11.402 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.402 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:11.402 13:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.402 13:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.402 13:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.402 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.402 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.402 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.659 13:27:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.659 13:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.915 00:19:11.915 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.915 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.915 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.177 { 00:19:12.177 "cntlid": 27, 00:19:12.177 "qid": 0, 00:19:12.177 "state": "enabled", 00:19:12.177 "thread": "nvmf_tgt_poll_group_000", 00:19:12.177 "listen_address": { 00:19:12.177 "trtype": "TCP", 00:19:12.177 "adrfam": "IPv4", 00:19:12.177 "traddr": "10.0.0.2", 00:19:12.177 "trsvcid": "4420" 00:19:12.177 }, 00:19:12.177 "peer_address": { 00:19:12.177 "trtype": "TCP", 00:19:12.177 "adrfam": "IPv4", 00:19:12.177 "traddr": "10.0.0.1", 00:19:12.177 "trsvcid": "48030" 00:19:12.177 }, 00:19:12.177 "auth": { 00:19:12.177 "state": "completed", 00:19:12.177 "digest": "sha256", 00:19:12.177 "dhgroup": "ffdhe4096" 00:19:12.177 } 00:19:12.177 } 00:19:12.177 ]' 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.177 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.488 13:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:19:13.420 13:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.420 13:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:13.420 13:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.420 13:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.420 13:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.420 13:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.420 13:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.420 13:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.677 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.241 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.241 13:27:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.241 { 00:19:14.241 "cntlid": 29, 00:19:14.241 "qid": 0, 00:19:14.241 "state": "enabled", 00:19:14.241 "thread": "nvmf_tgt_poll_group_000", 00:19:14.241 "listen_address": { 00:19:14.241 "trtype": "TCP", 00:19:14.241 "adrfam": "IPv4", 00:19:14.241 "traddr": "10.0.0.2", 00:19:14.241 "trsvcid": "4420" 00:19:14.241 }, 00:19:14.241 "peer_address": { 00:19:14.241 "trtype": "TCP", 00:19:14.241 "adrfam": "IPv4", 00:19:14.241 "traddr": "10.0.0.1", 00:19:14.241 "trsvcid": "48062" 00:19:14.241 }, 00:19:14.241 "auth": { 00:19:14.241 "state": "completed", 00:19:14.241 "digest": "sha256", 00:19:14.241 "dhgroup": "ffdhe4096" 00:19:14.241 } 00:19:14.241 } 00:19:14.241 ]' 00:19:14.241 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.498 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.499 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.499 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.499 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.499 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.499 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.499 13:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.756 13:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:19:15.685 13:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.685 13:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:15.685 13:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.685 13:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.685 13:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.685 13:27:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.685 13:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.685 13:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.943 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.200 00:19:16.457 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.457 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.457 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.714 { 00:19:16.714 "cntlid": 31, 00:19:16.714 "qid": 0, 00:19:16.714 "state": "enabled", 00:19:16.714 "thread": "nvmf_tgt_poll_group_000", 00:19:16.714 "listen_address": { 00:19:16.714 "trtype": "TCP", 00:19:16.714 "adrfam": "IPv4", 00:19:16.714 "traddr": "10.0.0.2", 00:19:16.714 "trsvcid": "4420" 00:19:16.714 }, 
00:19:16.714 "peer_address": { 00:19:16.714 "trtype": "TCP", 00:19:16.714 "adrfam": "IPv4", 00:19:16.714 "traddr": "10.0.0.1", 00:19:16.714 "trsvcid": "48096" 00:19:16.714 }, 00:19:16.714 "auth": { 00:19:16.714 "state": "completed", 00:19:16.714 "digest": "sha256", 00:19:16.714 "dhgroup": "ffdhe4096" 00:19:16.714 } 00:19:16.714 } 00:19:16.714 ]' 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.714 13:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.714 13:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:16.714 13:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.714 13:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.714 13:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.714 13:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.971 13:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.902 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.159 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.723 00:19:18.723 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.723 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.723 13:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.981 { 00:19:18.981 "cntlid": 33, 00:19:18.981 "qid": 0, 00:19:18.981 "state": "enabled", 00:19:18.981 "thread": "nvmf_tgt_poll_group_000", 00:19:18.981 "listen_address": { 00:19:18.981 "trtype": "TCP", 00:19:18.981 "adrfam": "IPv4", 00:19:18.981 "traddr": "10.0.0.2", 00:19:18.981 "trsvcid": "4420" 00:19:18.981 }, 00:19:18.981 "peer_address": { 00:19:18.981 "trtype": "TCP", 00:19:18.981 "adrfam": "IPv4", 00:19:18.981 "traddr": "10.0.0.1", 00:19:18.981 "trsvcid": "48126" 00:19:18.981 }, 00:19:18.981 "auth": { 00:19:18.981 "state": "completed", 00:19:18.981 "digest": "sha256", 00:19:18.981 "dhgroup": "ffdhe6144" 00:19:18.981 } 00:19:18.981 } 00:19:18.981 ]' 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.981 13:27:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.981 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.239 13:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:19:20.170 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.170 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:20.170 13:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.170 13:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.170 13:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.170 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.170 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.170 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.428 13:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.993 00:19:20.994 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.994 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.994 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.251 { 00:19:21.251 "cntlid": 35, 00:19:21.251 "qid": 0, 00:19:21.251 "state": "enabled", 00:19:21.251 "thread": "nvmf_tgt_poll_group_000", 00:19:21.251 "listen_address": { 00:19:21.251 "trtype": "TCP", 00:19:21.251 "adrfam": "IPv4", 00:19:21.251 "traddr": "10.0.0.2", 00:19:21.251 "trsvcid": "4420" 00:19:21.251 }, 00:19:21.251 "peer_address": { 00:19:21.251 "trtype": "TCP", 00:19:21.251 "adrfam": "IPv4", 00:19:21.251 "traddr": "10.0.0.1", 00:19:21.251 "trsvcid": "51796" 00:19:21.251 }, 00:19:21.251 "auth": { 00:19:21.251 "state": "completed", 00:19:21.251 "digest": "sha256", 00:19:21.251 "dhgroup": "ffdhe6144" 00:19:21.251 } 00:19:21.251 } 00:19:21.251 ]' 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.251 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.508 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.508 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.508 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.508 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.508 13:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.766 13:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:19:22.699 13:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.699 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:22.699 13:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.699 13:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.699 13:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.699 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.699 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.699 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.956 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.522 00:19:23.522 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.522 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.522 13:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.780 { 00:19:23.780 "cntlid": 37, 00:19:23.780 "qid": 0, 00:19:23.780 "state": "enabled", 00:19:23.780 "thread": "nvmf_tgt_poll_group_000", 00:19:23.780 "listen_address": { 00:19:23.780 "trtype": "TCP", 00:19:23.780 "adrfam": "IPv4", 00:19:23.780 "traddr": "10.0.0.2", 00:19:23.780 "trsvcid": "4420" 00:19:23.780 }, 00:19:23.780 "peer_address": { 00:19:23.780 "trtype": "TCP", 00:19:23.780 "adrfam": "IPv4", 00:19:23.780 "traddr": "10.0.0.1", 00:19:23.780 "trsvcid": "51826" 00:19:23.780 }, 00:19:23.780 "auth": { 00:19:23.780 "state": "completed", 00:19:23.780 "digest": "sha256", 00:19:23.780 "dhgroup": "ffdhe6144" 00:19:23.780 } 00:19:23.780 } 00:19:23.780 ]' 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.780 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.346 13:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.278 13:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.884 00:19:25.884 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.884 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.884 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.142 { 00:19:26.142 "cntlid": 39, 00:19:26.142 "qid": 0, 00:19:26.142 "state": "enabled", 00:19:26.142 "thread": "nvmf_tgt_poll_group_000", 00:19:26.142 "listen_address": { 00:19:26.142 "trtype": "TCP", 00:19:26.142 "adrfam": "IPv4", 00:19:26.142 "traddr": "10.0.0.2", 00:19:26.142 "trsvcid": "4420" 00:19:26.142 }, 00:19:26.142 "peer_address": { 00:19:26.142 "trtype": "TCP", 00:19:26.142 "adrfam": "IPv4", 00:19:26.142 "traddr": "10.0.0.1", 00:19:26.142 "trsvcid": "51866" 00:19:26.142 }, 00:19:26.142 "auth": { 00:19:26.142 "state": "completed", 00:19:26.142 "digest": "sha256", 00:19:26.142 "dhgroup": "ffdhe6144" 00:19:26.142 } 00:19:26.142 } 00:19:26.142 ]' 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.142 13:27:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.142 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.399 13:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.331 13:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.589 13:27:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.589 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.523 00:19:28.523 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.523 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.523 13:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.781 { 00:19:28.781 "cntlid": 41, 00:19:28.781 "qid": 0, 00:19:28.781 "state": "enabled", 00:19:28.781 "thread": "nvmf_tgt_poll_group_000", 00:19:28.781 "listen_address": { 00:19:28.781 "trtype": "TCP", 00:19:28.781 "adrfam": "IPv4", 00:19:28.781 "traddr": "10.0.0.2", 00:19:28.781 "trsvcid": "4420" 00:19:28.781 }, 00:19:28.781 "peer_address": { 00:19:28.781 "trtype": "TCP", 00:19:28.781 "adrfam": "IPv4", 00:19:28.781 "traddr": "10.0.0.1", 00:19:28.781 "trsvcid": "51898" 00:19:28.781 }, 00:19:28.781 "auth": { 00:19:28.781 "state": "completed", 00:19:28.781 "digest": "sha256", 00:19:28.781 "dhgroup": "ffdhe8192" 00:19:28.781 } 00:19:28.781 } 00:19:28.781 ]' 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.781 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.039 13:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:19:29.970 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.970 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:29.970 13:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.970 13:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.970 13:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.970 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.970 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:29.970 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.228 13:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.159 00:19:31.159 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.159 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.159 13:27:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.415 { 00:19:31.415 "cntlid": 43, 00:19:31.415 "qid": 0, 00:19:31.415 "state": "enabled", 00:19:31.415 "thread": "nvmf_tgt_poll_group_000", 00:19:31.415 "listen_address": { 00:19:31.415 "trtype": "TCP", 00:19:31.415 "adrfam": "IPv4", 00:19:31.415 "traddr": "10.0.0.2", 00:19:31.415 "trsvcid": "4420" 00:19:31.415 }, 00:19:31.415 "peer_address": { 00:19:31.415 "trtype": "TCP", 00:19:31.415 "adrfam": "IPv4", 00:19:31.415 "traddr": "10.0.0.1", 00:19:31.415 "trsvcid": "37690" 00:19:31.415 }, 00:19:31.415 "auth": { 00:19:31.415 "state": "completed", 00:19:31.415 "digest": "sha256", 00:19:31.415 "dhgroup": "ffdhe8192" 00:19:31.415 } 00:19:31.415 } 00:19:31.415 ]' 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.415 13:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.671 13:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:19:32.603 13:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.603 13:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.603 13:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.603 13:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.603 13:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.603 13:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.603 13:27:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:32.603 13:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.860 13:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.791 00:19:33.791 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.791 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.791 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.048 { 00:19:34.048 "cntlid": 45, 00:19:34.048 "qid": 0, 00:19:34.048 "state": "enabled", 00:19:34.048 "thread": "nvmf_tgt_poll_group_000", 00:19:34.048 "listen_address": { 00:19:34.048 "trtype": "TCP", 00:19:34.048 "adrfam": "IPv4", 00:19:34.048 "traddr": "10.0.0.2", 00:19:34.048 "trsvcid": "4420" 00:19:34.048 }, 00:19:34.048 
"peer_address": { 00:19:34.048 "trtype": "TCP", 00:19:34.048 "adrfam": "IPv4", 00:19:34.048 "traddr": "10.0.0.1", 00:19:34.048 "trsvcid": "37710" 00:19:34.048 }, 00:19:34.048 "auth": { 00:19:34.048 "state": "completed", 00:19:34.048 "digest": "sha256", 00:19:34.048 "dhgroup": "ffdhe8192" 00:19:34.048 } 00:19:34.048 } 00:19:34.048 ]' 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.048 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.306 13:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:19:35.236 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.236 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.236 13:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.236 13:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.236 13:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.236 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.236 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.236 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.492 13:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.421 00:19:36.421 13:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.421 13:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.421 13:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.678 13:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.678 13:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.678 13:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.678 13:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.678 13:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.678 13:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.678 { 00:19:36.678 "cntlid": 47, 00:19:36.678 "qid": 0, 00:19:36.678 "state": "enabled", 00:19:36.678 "thread": "nvmf_tgt_poll_group_000", 00:19:36.678 "listen_address": { 00:19:36.678 "trtype": "TCP", 00:19:36.678 "adrfam": "IPv4", 00:19:36.678 "traddr": "10.0.0.2", 00:19:36.678 "trsvcid": "4420" 00:19:36.678 }, 00:19:36.678 "peer_address": { 00:19:36.678 "trtype": "TCP", 00:19:36.678 "adrfam": "IPv4", 00:19:36.678 "traddr": "10.0.0.1", 00:19:36.678 "trsvcid": "37750" 00:19:36.678 }, 00:19:36.678 "auth": { 00:19:36.678 "state": "completed", 00:19:36.678 "digest": "sha256", 00:19:36.678 "dhgroup": "ffdhe8192" 00:19:36.678 } 00:19:36.678 } 00:19:36.678 ]' 00:19:36.678 13:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.678 13:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.678 13:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.678 13:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.678 13:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.678 13:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.678 13:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.678 13:27:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.935 13:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:37.867 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.125 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.383 00:19:38.383 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.383 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.383 13:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.641 { 00:19:38.641 "cntlid": 49, 00:19:38.641 "qid": 0, 00:19:38.641 "state": "enabled", 00:19:38.641 "thread": "nvmf_tgt_poll_group_000", 00:19:38.641 "listen_address": { 00:19:38.641 "trtype": "TCP", 00:19:38.641 "adrfam": "IPv4", 00:19:38.641 "traddr": "10.0.0.2", 00:19:38.641 "trsvcid": "4420" 00:19:38.641 }, 00:19:38.641 "peer_address": { 00:19:38.641 "trtype": "TCP", 00:19:38.641 "adrfam": "IPv4", 00:19:38.641 "traddr": "10.0.0.1", 00:19:38.641 "trsvcid": "37776" 00:19:38.641 }, 00:19:38.641 "auth": { 00:19:38.641 "state": "completed", 00:19:38.641 "digest": "sha384", 00:19:38.641 "dhgroup": "null" 00:19:38.641 } 00:19:38.641 } 00:19:38.641 ]' 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.641 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.899 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:38.899 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.899 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.899 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.899 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.157 13:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.136 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.394 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.394 13:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.394 13:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.394 13:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.394 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.394 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.652 00:19:40.652 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.652 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.652 13:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.910 { 00:19:40.910 "cntlid": 51, 00:19:40.910 "qid": 0, 00:19:40.910 "state": "enabled", 00:19:40.910 "thread": "nvmf_tgt_poll_group_000", 00:19:40.910 "listen_address": { 00:19:40.910 "trtype": "TCP", 00:19:40.910 "adrfam": "IPv4", 00:19:40.910 "traddr": "10.0.0.2", 00:19:40.910 "trsvcid": "4420" 00:19:40.910 }, 00:19:40.910 "peer_address": { 00:19:40.910 "trtype": "TCP", 00:19:40.910 "adrfam": "IPv4", 00:19:40.910 "traddr": "10.0.0.1", 00:19:40.910 "trsvcid": "54724" 00:19:40.910 }, 00:19:40.910 "auth": { 00:19:40.910 "state": "completed", 00:19:40.910 "digest": "sha384", 00:19:40.910 "dhgroup": "null" 00:19:40.910 } 00:19:40.910 } 00:19:40.910 ]' 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.910 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.168 13:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:19:42.101 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.101 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:42.101 13:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.101 13:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.101 13:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.101 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.101 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:42.101 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:42.359 13:27:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.359 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.617 00:19:42.617 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.617 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.617 13:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.874 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.875 { 00:19:42.875 "cntlid": 53, 00:19:42.875 "qid": 0, 00:19:42.875 "state": "enabled", 00:19:42.875 "thread": "nvmf_tgt_poll_group_000", 00:19:42.875 "listen_address": { 00:19:42.875 "trtype": "TCP", 00:19:42.875 "adrfam": "IPv4", 00:19:42.875 "traddr": "10.0.0.2", 00:19:42.875 "trsvcid": "4420" 00:19:42.875 }, 00:19:42.875 "peer_address": { 00:19:42.875 "trtype": "TCP", 00:19:42.875 "adrfam": "IPv4", 00:19:42.875 "traddr": "10.0.0.1", 00:19:42.875 "trsvcid": "54750" 00:19:42.875 }, 00:19:42.875 "auth": { 00:19:42.875 "state": "completed", 00:19:42.875 "digest": "sha384", 00:19:42.875 "dhgroup": "null" 00:19:42.875 } 00:19:42.875 } 00:19:42.875 ]' 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:42.875 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.132 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.132 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.132 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.390 13:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.324 13:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.889 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.889 { 00:19:44.889 "cntlid": 55, 00:19:44.889 "qid": 0, 00:19:44.889 "state": "enabled", 00:19:44.889 "thread": "nvmf_tgt_poll_group_000", 00:19:44.889 "listen_address": { 00:19:44.889 "trtype": "TCP", 00:19:44.889 "adrfam": "IPv4", 00:19:44.889 "traddr": "10.0.0.2", 00:19:44.889 "trsvcid": "4420" 00:19:44.889 }, 00:19:44.889 "peer_address": { 00:19:44.889 "trtype": "TCP", 00:19:44.889 "adrfam": "IPv4", 00:19:44.889 "traddr": "10.0.0.1", 00:19:44.889 "trsvcid": "54786" 00:19:44.889 }, 00:19:44.889 "auth": { 00:19:44.889 "state": "completed", 00:19:44.889 "digest": "sha384", 00:19:44.889 "dhgroup": "null" 00:19:44.889 } 00:19:44.889 } 00:19:44.889 ]' 00:19:44.889 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.145 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.145 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.145 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:45.145 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.145 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.145 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.145 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.401 13:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:19:46.333 13:27:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.333 13:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.899 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.899 { 00:19:46.899 "cntlid": 57, 00:19:46.899 "qid": 0, 00:19:46.899 "state": "enabled", 00:19:46.899 "thread": "nvmf_tgt_poll_group_000", 00:19:46.899 "listen_address": { 00:19:46.899 "trtype": "TCP", 00:19:46.899 "adrfam": "IPv4", 00:19:46.899 "traddr": "10.0.0.2", 00:19:46.899 "trsvcid": "4420" 00:19:46.899 }, 00:19:46.899 "peer_address": { 00:19:46.899 "trtype": "TCP", 00:19:46.899 "adrfam": "IPv4", 00:19:46.899 "traddr": "10.0.0.1", 00:19:46.899 "trsvcid": "54808" 00:19:46.899 }, 00:19:46.899 "auth": { 00:19:46.899 "state": "completed", 00:19:46.899 "digest": "sha384", 00:19:46.899 "dhgroup": "ffdhe2048" 00:19:46.899 } 00:19:46.899 } 00:19:46.899 ]' 00:19:46.899 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.157 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.157 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.157 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.157 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.157 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.157 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.157 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.415 13:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:19:48.346 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.346 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:48.346 13:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.346 13:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.346 13:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.346 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.346 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.346 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.604 13:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.861 00:19:48.861 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.862 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.862 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.140 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.140 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.140 13:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.140 13:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.140 13:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.140 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.140 { 00:19:49.140 "cntlid": 59, 00:19:49.141 "qid": 0, 00:19:49.141 "state": "enabled", 00:19:49.141 "thread": "nvmf_tgt_poll_group_000", 00:19:49.141 "listen_address": { 00:19:49.141 "trtype": "TCP", 00:19:49.141 "adrfam": "IPv4", 00:19:49.141 "traddr": "10.0.0.2", 00:19:49.141 "trsvcid": "4420" 00:19:49.141 }, 00:19:49.141 "peer_address": { 00:19:49.141 "trtype": "TCP", 00:19:49.141 "adrfam": "IPv4", 00:19:49.141 
"traddr": "10.0.0.1", 00:19:49.141 "trsvcid": "54828" 00:19:49.141 }, 00:19:49.141 "auth": { 00:19:49.141 "state": "completed", 00:19:49.141 "digest": "sha384", 00:19:49.141 "dhgroup": "ffdhe2048" 00:19:49.141 } 00:19:49.141 } 00:19:49.141 ]' 00:19:49.141 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.141 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.141 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.141 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.141 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.141 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.141 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.141 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.399 13:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:19:50.332 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.332 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.332 13:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.332 13:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.332 13:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.332 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.332 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.332 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.591 13:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.849 00:19:50.849 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.849 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.849 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.107 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.107 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.107 13:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.107 13:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.107 13:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.107 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.107 { 00:19:51.107 "cntlid": 61, 00:19:51.107 "qid": 0, 00:19:51.107 "state": "enabled", 00:19:51.107 "thread": "nvmf_tgt_poll_group_000", 00:19:51.107 "listen_address": { 00:19:51.107 "trtype": "TCP", 00:19:51.107 "adrfam": "IPv4", 00:19:51.107 "traddr": "10.0.0.2", 00:19:51.107 "trsvcid": "4420" 00:19:51.107 }, 00:19:51.107 "peer_address": { 00:19:51.107 "trtype": "TCP", 00:19:51.107 "adrfam": "IPv4", 00:19:51.107 "traddr": "10.0.0.1", 00:19:51.107 "trsvcid": "55096" 00:19:51.107 }, 00:19:51.107 "auth": { 00:19:51.107 "state": "completed", 00:19:51.108 "digest": "sha384", 00:19:51.108 "dhgroup": "ffdhe2048" 00:19:51.108 } 00:19:51.108 } 00:19:51.108 ]' 00:19:51.108 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.108 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.108 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.366 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.366 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.366 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.366 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.366 13:27:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.624 13:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:19:52.557 13:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.557 13:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:52.557 13:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.557 13:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.557 13:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.557 13:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.557 13:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:52.557 13:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.815 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.072 00:19:53.073 13:27:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.073 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.073 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.330 { 00:19:53.330 "cntlid": 63, 00:19:53.330 "qid": 0, 00:19:53.330 "state": "enabled", 00:19:53.330 "thread": "nvmf_tgt_poll_group_000", 00:19:53.330 "listen_address": { 00:19:53.330 "trtype": "TCP", 00:19:53.330 "adrfam": "IPv4", 00:19:53.330 "traddr": "10.0.0.2", 00:19:53.330 "trsvcid": "4420" 00:19:53.330 }, 00:19:53.330 "peer_address": { 00:19:53.330 "trtype": "TCP", 00:19:53.330 "adrfam": "IPv4", 00:19:53.330 "traddr": "10.0.0.1", 00:19:53.330 "trsvcid": "55126" 00:19:53.330 }, 00:19:53.330 "auth": { 00:19:53.330 "state": "completed", 00:19:53.330 "digest": "sha384", 00:19:53.330 "dhgroup": "ffdhe2048" 00:19:53.330 } 00:19:53.330 } 00:19:53.330 ]' 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.330 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.637 13:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:19:54.607 13:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.608 13:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:54.608 13:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.608 13:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
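[Reading aid, not part of the test output] The entries above are one full pass of the sha384/ffdhe2048 group; the trace that follows repeats the same steps for ffdhe3072 and ffdhe4096. As a summary, the per-iteration flow of target/auth.sh can be condensed into the sketch below. It only restates the RPC and nvme-cli calls already visible in the trace: the explicit -s /var/tmp/host.sock calls are the host side ("hostrpc" in the trace), while the plain rpc_cmd calls are assumed here to go to the target's default RPC socket. The subsystem NQN, host NQN/hostid and jq assertions are taken verbatim from the log; key0/ckey0 are the key names the script registered earlier, and the DHHC-1 secrets are stand-ins for the literal values shown above.

  # Condensed sketch of one connect_authenticate round, reconstructed from the trace.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

  # 1) Limit the host-side bdev_nvme layer to the digest/dhgroup under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # 2) Allow the host on the target subsystem with the key pair under test.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3) Attach a controller from the host side with the same keys and verify it appears.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # 4) On the target, confirm the qpair completed DH-HMAC-CHAP with the expected parameters.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # 5) Repeat the handshake through the kernel initiator using the DHHC-1 secrets
  #    (placeholders here for the literal values in the trace), then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The [[ ]] comparisons mirror the assertions repeated after every attach in the trace (controller name, negotiated digest, dhgroup, and auth state "completed"); the subsequent iterations differ only in the --dhchap-dhgroups value and the key index.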
00:19:54.608 13:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.608 13:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.608 13:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.608 13:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.608 13:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.866 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.126 00:19:55.126 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.126 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.126 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.384 { 
00:19:55.384 "cntlid": 65, 00:19:55.384 "qid": 0, 00:19:55.384 "state": "enabled", 00:19:55.384 "thread": "nvmf_tgt_poll_group_000", 00:19:55.384 "listen_address": { 00:19:55.384 "trtype": "TCP", 00:19:55.384 "adrfam": "IPv4", 00:19:55.384 "traddr": "10.0.0.2", 00:19:55.384 "trsvcid": "4420" 00:19:55.384 }, 00:19:55.384 "peer_address": { 00:19:55.384 "trtype": "TCP", 00:19:55.384 "adrfam": "IPv4", 00:19:55.384 "traddr": "10.0.0.1", 00:19:55.384 "trsvcid": "55150" 00:19:55.384 }, 00:19:55.384 "auth": { 00:19:55.384 "state": "completed", 00:19:55.384 "digest": "sha384", 00:19:55.384 "dhgroup": "ffdhe3072" 00:19:55.384 } 00:19:55.384 } 00:19:55.384 ]' 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.384 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.642 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.642 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.642 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.642 13:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.899 13:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:19:56.831 13:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.831 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:56.831 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.831 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.831 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.831 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.831 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.831 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.089 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.347 00:19:57.347 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.347 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.347 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.605 { 00:19:57.605 "cntlid": 67, 00:19:57.605 "qid": 0, 00:19:57.605 "state": "enabled", 00:19:57.605 "thread": "nvmf_tgt_poll_group_000", 00:19:57.605 "listen_address": { 00:19:57.605 "trtype": "TCP", 00:19:57.605 "adrfam": "IPv4", 00:19:57.605 "traddr": "10.0.0.2", 00:19:57.605 "trsvcid": "4420" 00:19:57.605 }, 00:19:57.605 "peer_address": { 00:19:57.605 "trtype": "TCP", 00:19:57.605 "adrfam": "IPv4", 00:19:57.605 "traddr": "10.0.0.1", 00:19:57.605 "trsvcid": "55174" 00:19:57.605 }, 00:19:57.605 "auth": { 00:19:57.605 "state": "completed", 00:19:57.605 "digest": "sha384", 00:19:57.605 "dhgroup": "ffdhe3072" 00:19:57.605 } 00:19:57.605 } 00:19:57.605 ]' 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.605 13:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.605 13:27:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.605 13:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.605 13:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.605 13:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.605 13:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.862 13:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:19:58.796 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.796 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:58.796 13:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.796 13:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.796 13:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.796 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.796 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:58.796 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.054 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.618 00:19:59.618 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.618 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.618 13:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.876 { 00:19:59.876 "cntlid": 69, 00:19:59.876 "qid": 0, 00:19:59.876 "state": "enabled", 00:19:59.876 "thread": "nvmf_tgt_poll_group_000", 00:19:59.876 "listen_address": { 00:19:59.876 "trtype": "TCP", 00:19:59.876 "adrfam": "IPv4", 00:19:59.876 "traddr": "10.0.0.2", 00:19:59.876 "trsvcid": "4420" 00:19:59.876 }, 00:19:59.876 "peer_address": { 00:19:59.876 "trtype": "TCP", 00:19:59.876 "adrfam": "IPv4", 00:19:59.876 "traddr": "10.0.0.1", 00:19:59.876 "trsvcid": "35442" 00:19:59.876 }, 00:19:59.876 "auth": { 00:19:59.876 "state": "completed", 00:19:59.876 "digest": "sha384", 00:19:59.876 "dhgroup": "ffdhe3072" 00:19:59.876 } 00:19:59.876 } 00:19:59.876 ]' 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.876 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.134 13:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret 
DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:20:01.068 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.068 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:01.068 13:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.068 13:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.068 13:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.068 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.068 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:01.068 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:01.325 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:01.325 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.325 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.325 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:01.325 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.325 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.326 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:01.326 13:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.326 13:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.326 13:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.326 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.326 13:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.890 00:20:01.890 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.890 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.890 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.147 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.148 { 00:20:02.148 "cntlid": 71, 00:20:02.148 "qid": 0, 00:20:02.148 "state": "enabled", 00:20:02.148 "thread": "nvmf_tgt_poll_group_000", 00:20:02.148 "listen_address": { 00:20:02.148 "trtype": "TCP", 00:20:02.148 "adrfam": "IPv4", 00:20:02.148 "traddr": "10.0.0.2", 00:20:02.148 "trsvcid": "4420" 00:20:02.148 }, 00:20:02.148 "peer_address": { 00:20:02.148 "trtype": "TCP", 00:20:02.148 "adrfam": "IPv4", 00:20:02.148 "traddr": "10.0.0.1", 00:20:02.148 "trsvcid": "35470" 00:20:02.148 }, 00:20:02.148 "auth": { 00:20:02.148 "state": "completed", 00:20:02.148 "digest": "sha384", 00:20:02.148 "dhgroup": "ffdhe3072" 00:20:02.148 } 00:20:02.148 } 00:20:02.148 ]' 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.148 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.406 13:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.340 13:28:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.600 13:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.600 13:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.600 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.600 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.164 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.164 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.164 { 00:20:04.164 "cntlid": 73, 00:20:04.164 "qid": 0, 00:20:04.164 "state": "enabled", 00:20:04.164 "thread": "nvmf_tgt_poll_group_000", 00:20:04.164 "listen_address": { 00:20:04.164 "trtype": "TCP", 00:20:04.164 "adrfam": "IPv4", 00:20:04.164 "traddr": "10.0.0.2", 00:20:04.164 "trsvcid": "4420" 00:20:04.164 }, 00:20:04.164 "peer_address": { 00:20:04.164 "trtype": "TCP", 00:20:04.164 "adrfam": "IPv4", 00:20:04.164 "traddr": "10.0.0.1", 00:20:04.164 "trsvcid": "35494" 00:20:04.164 }, 00:20:04.164 "auth": { 00:20:04.164 
"state": "completed", 00:20:04.164 "digest": "sha384", 00:20:04.164 "dhgroup": "ffdhe4096" 00:20:04.164 } 00:20:04.164 } 00:20:04.164 ]' 00:20:04.165 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.421 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.421 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.421 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.421 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.421 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.421 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.421 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.678 13:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:20:05.610 13:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.610 13:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.610 13:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.610 13:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.610 13:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.610 13:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.610 13:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.610 13:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.867 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.123 00:20:06.123 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.123 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.123 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.380 { 00:20:06.380 "cntlid": 75, 00:20:06.380 "qid": 0, 00:20:06.380 "state": "enabled", 00:20:06.380 "thread": "nvmf_tgt_poll_group_000", 00:20:06.380 "listen_address": { 00:20:06.380 "trtype": "TCP", 00:20:06.380 "adrfam": "IPv4", 00:20:06.380 "traddr": "10.0.0.2", 00:20:06.380 "trsvcid": "4420" 00:20:06.380 }, 00:20:06.380 "peer_address": { 00:20:06.380 "trtype": "TCP", 00:20:06.380 "adrfam": "IPv4", 00:20:06.380 "traddr": "10.0.0.1", 00:20:06.380 "trsvcid": "35522" 00:20:06.380 }, 00:20:06.380 "auth": { 00:20:06.380 "state": "completed", 00:20:06.380 "digest": "sha384", 00:20:06.380 "dhgroup": "ffdhe4096" 00:20:06.380 } 00:20:06.380 } 00:20:06.380 ]' 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.380 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.636 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.636 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.636 13:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.893 13:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:20:07.853 13:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.853 13:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:07.853 13:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.853 13:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.853 13:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.853 13:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.853 13:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.853 13:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.853 13:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.854 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.854 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:08.419 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.419 { 00:20:08.419 "cntlid": 77, 00:20:08.419 "qid": 0, 00:20:08.419 "state": "enabled", 00:20:08.419 "thread": "nvmf_tgt_poll_group_000", 00:20:08.419 "listen_address": { 00:20:08.419 "trtype": "TCP", 00:20:08.419 "adrfam": "IPv4", 00:20:08.419 "traddr": "10.0.0.2", 00:20:08.419 "trsvcid": "4420" 00:20:08.419 }, 00:20:08.419 "peer_address": { 00:20:08.419 "trtype": "TCP", 00:20:08.419 "adrfam": "IPv4", 00:20:08.419 "traddr": "10.0.0.1", 00:20:08.419 "trsvcid": "35554" 00:20:08.419 }, 00:20:08.419 "auth": { 00:20:08.419 "state": "completed", 00:20:08.419 "digest": "sha384", 00:20:08.419 "dhgroup": "ffdhe4096" 00:20:08.419 } 00:20:08.419 } 00:20:08.419 ]' 00:20:08.419 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.676 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.676 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.676 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.676 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.676 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.676 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.676 13:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.933 13:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.865 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.429 00:20:10.429 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.429 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.429 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.685 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.685 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.685 13:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.685 13:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.685 13:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.685 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.685 { 00:20:10.685 "cntlid": 79, 00:20:10.685 "qid": 
0, 00:20:10.685 "state": "enabled", 00:20:10.685 "thread": "nvmf_tgt_poll_group_000", 00:20:10.685 "listen_address": { 00:20:10.685 "trtype": "TCP", 00:20:10.685 "adrfam": "IPv4", 00:20:10.685 "traddr": "10.0.0.2", 00:20:10.685 "trsvcid": "4420" 00:20:10.685 }, 00:20:10.686 "peer_address": { 00:20:10.686 "trtype": "TCP", 00:20:10.686 "adrfam": "IPv4", 00:20:10.686 "traddr": "10.0.0.1", 00:20:10.686 "trsvcid": "56308" 00:20:10.686 }, 00:20:10.686 "auth": { 00:20:10.686 "state": "completed", 00:20:10.686 "digest": "sha384", 00:20:10.686 "dhgroup": "ffdhe4096" 00:20:10.686 } 00:20:10.686 } 00:20:10.686 ]' 00:20:10.686 13:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.686 13:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.686 13:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.686 13:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.686 13:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.686 13:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.686 13:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.686 13:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.942 13:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.873 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.148 13:28:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.148 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.714 00:20:12.714 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.714 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.714 13:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.971 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.971 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.971 13:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.971 13:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.972 { 00:20:12.972 "cntlid": 81, 00:20:12.972 "qid": 0, 00:20:12.972 "state": "enabled", 00:20:12.972 "thread": "nvmf_tgt_poll_group_000", 00:20:12.972 "listen_address": { 00:20:12.972 "trtype": "TCP", 00:20:12.972 "adrfam": "IPv4", 00:20:12.972 "traddr": "10.0.0.2", 00:20:12.972 "trsvcid": "4420" 00:20:12.972 }, 00:20:12.972 "peer_address": { 00:20:12.972 "trtype": "TCP", 00:20:12.972 "adrfam": "IPv4", 00:20:12.972 "traddr": "10.0.0.1", 00:20:12.972 "trsvcid": "56348" 00:20:12.972 }, 00:20:12.972 "auth": { 00:20:12.972 "state": "completed", 00:20:12.972 "digest": "sha384", 00:20:12.972 "dhgroup": "ffdhe6144" 00:20:12.972 } 00:20:12.972 } 00:20:12.972 ]' 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.972 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.229 13:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:20:14.161 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.161 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:14.161 13:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.161 13:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.161 13:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.161 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.161 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.161 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.418 13:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.982 00:20:14.982 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.982 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.982 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.239 { 00:20:15.239 "cntlid": 83, 00:20:15.239 "qid": 0, 00:20:15.239 "state": "enabled", 00:20:15.239 "thread": "nvmf_tgt_poll_group_000", 00:20:15.239 "listen_address": { 00:20:15.239 "trtype": "TCP", 00:20:15.239 "adrfam": "IPv4", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "trsvcid": "4420" 00:20:15.239 }, 00:20:15.239 "peer_address": { 00:20:15.239 "trtype": "TCP", 00:20:15.239 "adrfam": "IPv4", 00:20:15.239 "traddr": "10.0.0.1", 00:20:15.239 "trsvcid": "56384" 00:20:15.239 }, 00:20:15.239 "auth": { 00:20:15.239 "state": "completed", 00:20:15.239 "digest": "sha384", 00:20:15.239 "dhgroup": "ffdhe6144" 00:20:15.239 } 00:20:15.239 } 00:20:15.239 ]' 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.239 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.496 13:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret 
DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:20:16.426 13:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.426 13:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:16.426 13:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.426 13:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.426 13:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.426 13:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.426 13:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.426 13:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.683 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.248 00:20:17.248 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.248 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.248 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.506 { 00:20:17.506 "cntlid": 85, 00:20:17.506 "qid": 0, 00:20:17.506 "state": "enabled", 00:20:17.506 "thread": "nvmf_tgt_poll_group_000", 00:20:17.506 "listen_address": { 00:20:17.506 "trtype": "TCP", 00:20:17.506 "adrfam": "IPv4", 00:20:17.506 "traddr": "10.0.0.2", 00:20:17.506 "trsvcid": "4420" 00:20:17.506 }, 00:20:17.506 "peer_address": { 00:20:17.506 "trtype": "TCP", 00:20:17.506 "adrfam": "IPv4", 00:20:17.506 "traddr": "10.0.0.1", 00:20:17.506 "trsvcid": "56404" 00:20:17.506 }, 00:20:17.506 "auth": { 00:20:17.506 "state": "completed", 00:20:17.506 "digest": "sha384", 00:20:17.506 "dhgroup": "ffdhe6144" 00:20:17.506 } 00:20:17.506 } 00:20:17.506 ]' 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.506 13:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.763 13:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:20:18.695 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.695 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:18.695 13:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.695 13:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.695 13:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.695 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.695 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
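Note on the trace above: each digest/DH-group/key combination runs the same fixed sequence, namely restrict the SPDK host application to one --dhchap-digests/--dhchap-dhgroups pair, register the host NQN on the target with a key pair, attach a controller with the same keys, confirm the negotiated auth parameters from the qpairs listing, then detach and deregister. The shell below is a condensed sketch of one such iteration (not the auth.sh script itself), using only RPCs that appear verbatim in the trace. The rpc.py path, host socket, addresses and NQNs are copied from the log; key2/ckey2 stand in for whichever key index the loop is on (those named keys are registered earlier in this test run); issuing the target-side calls against rpc.py's default socket is an assumption, since the rpc_cmd wrapper in the trace does not show which socket it targets.

# Sketch of one authentication round-trip (SPDK host application side), assuming
# the nvmf target and the bdev_nvme host app started earlier in this log are running.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Limit the host to one digest/DH-group pair for this round.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Allow the host on the target side with a specific key pair (default target RPC socket assumed).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach from the host with the same keys, then check what was negotiated.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
$RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'       # expect: nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'     # expect: sha384
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'    # expect: ffdhe6144
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'      # expect: completed

# Tear down before the next combination.
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN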
00:20:18.695 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.952 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.517 00:20:19.517 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.517 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.517 13:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.774 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.774 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.774 13:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.775 { 00:20:19.775 "cntlid": 87, 00:20:19.775 "qid": 0, 00:20:19.775 "state": "enabled", 00:20:19.775 "thread": "nvmf_tgt_poll_group_000", 00:20:19.775 "listen_address": { 00:20:19.775 "trtype": "TCP", 00:20:19.775 "adrfam": "IPv4", 00:20:19.775 "traddr": "10.0.0.2", 00:20:19.775 "trsvcid": "4420" 00:20:19.775 }, 00:20:19.775 "peer_address": { 00:20:19.775 "trtype": "TCP", 00:20:19.775 "adrfam": "IPv4", 00:20:19.775 "traddr": "10.0.0.1", 00:20:19.775 "trsvcid": "43322" 00:20:19.775 }, 00:20:19.775 "auth": { 00:20:19.775 "state": "completed", 
00:20:19.775 "digest": "sha384", 00:20:19.775 "dhgroup": "ffdhe6144" 00:20:19.775 } 00:20:19.775 } 00:20:19.775 ]' 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.775 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.032 13:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.964 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.222 13:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.211 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.211 { 00:20:22.211 "cntlid": 89, 00:20:22.211 "qid": 0, 00:20:22.211 "state": "enabled", 00:20:22.211 "thread": "nvmf_tgt_poll_group_000", 00:20:22.211 "listen_address": { 00:20:22.211 "trtype": "TCP", 00:20:22.211 "adrfam": "IPv4", 00:20:22.211 "traddr": "10.0.0.2", 00:20:22.211 "trsvcid": "4420" 00:20:22.211 }, 00:20:22.211 "peer_address": { 00:20:22.211 "trtype": "TCP", 00:20:22.211 "adrfam": "IPv4", 00:20:22.211 "traddr": "10.0.0.1", 00:20:22.211 "trsvcid": "43338" 00:20:22.211 }, 00:20:22.211 "auth": { 00:20:22.211 "state": "completed", 00:20:22.211 "digest": "sha384", 00:20:22.211 "dhgroup": "ffdhe8192" 00:20:22.211 } 00:20:22.211 } 00:20:22.211 ]' 00:20:22.211 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.469 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.469 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.469 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.469 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.469 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.469 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.469 13:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.726 13:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:20:23.658 13:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.658 13:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:23.658 13:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.658 13:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.658 13:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.658 13:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.658 13:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.658 13:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.916 13:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
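After each attach/verify/detach pass like the one in progress above, the trace also exercises the kernel initiator: it connects with nvme-cli, passing the DHHC-1 secrets directly on the command line, and treats a clean disconnect of exactly one controller as success. The commands below are lifted from the nvme connect / nvme disconnect lines in this log; $DHCHAP_SECRET and $DHCHAP_CTRL_SECRET are placeholders for the literal DHHC-1:xx:...: strings that rotate with the key index, and the sketch assumes the matching keys are still registered for this host NQN on the target (as in the preceding nvmf_subsystem_add_host call).

# Kernel nvme-cli leg of the same check. The two secrets are placeholders; the real
# values in the trace are the DHHC-1:xx:...: strings for the key index under test.
HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
SUBNQN=nqn.2024-03.io.spdk:cnode0

nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid $HOSTID \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"

# A successful authentication shows up as a clean teardown of exactly one controller:
nvme disconnect -n $SUBNQN    # prints: NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)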
00:20:24.849 00:20:24.849 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.849 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.849 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.106 { 00:20:25.106 "cntlid": 91, 00:20:25.106 "qid": 0, 00:20:25.106 "state": "enabled", 00:20:25.106 "thread": "nvmf_tgt_poll_group_000", 00:20:25.106 "listen_address": { 00:20:25.106 "trtype": "TCP", 00:20:25.106 "adrfam": "IPv4", 00:20:25.106 "traddr": "10.0.0.2", 00:20:25.106 "trsvcid": "4420" 00:20:25.106 }, 00:20:25.106 "peer_address": { 00:20:25.106 "trtype": "TCP", 00:20:25.106 "adrfam": "IPv4", 00:20:25.106 "traddr": "10.0.0.1", 00:20:25.106 "trsvcid": "43356" 00:20:25.106 }, 00:20:25.106 "auth": { 00:20:25.106 "state": "completed", 00:20:25.106 "digest": "sha384", 00:20:25.106 "dhgroup": "ffdhe8192" 00:20:25.106 } 00:20:25.106 } 00:20:25.106 ]' 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.106 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.363 13:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:20:26.293 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.293 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:26.293 13:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:26.293 13:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.293 13:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.293 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.293 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.293 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.550 13:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.482 00:20:27.482 13:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.482 13:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.482 13:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.739 13:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.739 13:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.739 13:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.739 13:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.739 13:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.739 13:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.739 { 
00:20:27.739 "cntlid": 93, 00:20:27.739 "qid": 0, 00:20:27.739 "state": "enabled", 00:20:27.739 "thread": "nvmf_tgt_poll_group_000", 00:20:27.739 "listen_address": { 00:20:27.739 "trtype": "TCP", 00:20:27.739 "adrfam": "IPv4", 00:20:27.739 "traddr": "10.0.0.2", 00:20:27.739 "trsvcid": "4420" 00:20:27.739 }, 00:20:27.739 "peer_address": { 00:20:27.739 "trtype": "TCP", 00:20:27.739 "adrfam": "IPv4", 00:20:27.739 "traddr": "10.0.0.1", 00:20:27.739 "trsvcid": "43382" 00:20:27.739 }, 00:20:27.739 "auth": { 00:20:27.739 "state": "completed", 00:20:27.739 "digest": "sha384", 00:20:27.739 "dhgroup": "ffdhe8192" 00:20:27.739 } 00:20:27.739 } 00:20:27.739 ]' 00:20:27.739 13:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.739 13:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.739 13:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.739 13:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.739 13:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.739 13:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.739 13:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.739 13:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.996 13:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:20:28.927 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.927 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.927 13:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.927 13:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.927 13:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.927 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.927 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.927 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.185 13:28:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.185 13:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.116 00:20:30.116 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.116 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.116 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.373 { 00:20:30.373 "cntlid": 95, 00:20:30.373 "qid": 0, 00:20:30.373 "state": "enabled", 00:20:30.373 "thread": "nvmf_tgt_poll_group_000", 00:20:30.373 "listen_address": { 00:20:30.373 "trtype": "TCP", 00:20:30.373 "adrfam": "IPv4", 00:20:30.373 "traddr": "10.0.0.2", 00:20:30.373 "trsvcid": "4420" 00:20:30.373 }, 00:20:30.373 "peer_address": { 00:20:30.373 "trtype": "TCP", 00:20:30.373 "adrfam": "IPv4", 00:20:30.373 "traddr": "10.0.0.1", 00:20:30.373 "trsvcid": "50586" 00:20:30.373 }, 00:20:30.373 "auth": { 00:20:30.373 "state": "completed", 00:20:30.373 "digest": "sha384", 00:20:30.373 "dhgroup": "ffdhe8192" 00:20:30.373 } 00:20:30.373 } 00:20:30.373 ]' 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.373 13:28:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.373 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.631 13:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.561 13:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.818 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.074 00:20:32.074 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.074 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.074 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.330 { 00:20:32.330 "cntlid": 97, 00:20:32.330 "qid": 0, 00:20:32.330 "state": "enabled", 00:20:32.330 "thread": "nvmf_tgt_poll_group_000", 00:20:32.330 "listen_address": { 00:20:32.330 "trtype": "TCP", 00:20:32.330 "adrfam": "IPv4", 00:20:32.330 "traddr": "10.0.0.2", 00:20:32.330 "trsvcid": "4420" 00:20:32.330 }, 00:20:32.330 "peer_address": { 00:20:32.330 "trtype": "TCP", 00:20:32.330 "adrfam": "IPv4", 00:20:32.330 "traddr": "10.0.0.1", 00:20:32.330 "trsvcid": "50618" 00:20:32.330 }, 00:20:32.330 "auth": { 00:20:32.330 "state": "completed", 00:20:32.330 "digest": "sha512", 00:20:32.330 "dhgroup": "null" 00:20:32.330 } 00:20:32.330 } 00:20:32.330 ]' 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.330 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:32.331 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.331 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.331 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.331 13:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.588 13:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret 
DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:20:33.518 13:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.518 13:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:33.518 13:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.518 13:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.518 13:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.518 13:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.518 13:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.518 13:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.775 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.339 00:20:34.339 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.339 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.339 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.596 { 00:20:34.596 "cntlid": 99, 00:20:34.596 "qid": 0, 00:20:34.596 "state": "enabled", 00:20:34.596 "thread": "nvmf_tgt_poll_group_000", 00:20:34.596 "listen_address": { 00:20:34.596 "trtype": "TCP", 00:20:34.596 "adrfam": "IPv4", 00:20:34.596 "traddr": "10.0.0.2", 00:20:34.596 "trsvcid": "4420" 00:20:34.596 }, 00:20:34.596 "peer_address": { 00:20:34.596 "trtype": "TCP", 00:20:34.596 "adrfam": "IPv4", 00:20:34.596 "traddr": "10.0.0.1", 00:20:34.596 "trsvcid": "50646" 00:20:34.596 }, 00:20:34.596 "auth": { 00:20:34.596 "state": "completed", 00:20:34.596 "digest": "sha512", 00:20:34.596 "dhgroup": "null" 00:20:34.596 } 00:20:34.596 } 00:20:34.596 ]' 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.596 13:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.854 13:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:20:35.803 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.803 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:35.803 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.803 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.803 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.803 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.803 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.803 13:28:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.085 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.343 00:20:36.343 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.343 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.343 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.600 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.600 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.600 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.600 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.600 13:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.600 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.600 { 00:20:36.600 "cntlid": 101, 00:20:36.600 "qid": 0, 00:20:36.600 "state": "enabled", 00:20:36.600 "thread": "nvmf_tgt_poll_group_000", 00:20:36.600 "listen_address": { 00:20:36.600 "trtype": "TCP", 00:20:36.600 "adrfam": "IPv4", 00:20:36.600 "traddr": "10.0.0.2", 00:20:36.600 "trsvcid": "4420" 00:20:36.600 }, 00:20:36.600 "peer_address": { 00:20:36.600 "trtype": "TCP", 00:20:36.600 "adrfam": "IPv4", 00:20:36.600 "traddr": "10.0.0.1", 00:20:36.600 "trsvcid": "50664" 00:20:36.600 }, 00:20:36.600 "auth": 
{ 00:20:36.600 "state": "completed", 00:20:36.600 "digest": "sha512", 00:20:36.600 "dhgroup": "null" 00:20:36.600 } 00:20:36.600 } 00:20:36.600 ]' 00:20:36.600 13:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.600 13:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.600 13:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.600 13:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.600 13:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.857 13:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.857 13:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.857 13:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.114 13:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.054 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.312 00:20:38.312 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.312 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.312 13:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.569 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.569 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.569 13:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.569 13:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.569 13:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.569 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.569 { 00:20:38.569 "cntlid": 103, 00:20:38.569 "qid": 0, 00:20:38.569 "state": "enabled", 00:20:38.569 "thread": "nvmf_tgt_poll_group_000", 00:20:38.569 "listen_address": { 00:20:38.569 "trtype": "TCP", 00:20:38.569 "adrfam": "IPv4", 00:20:38.569 "traddr": "10.0.0.2", 00:20:38.569 "trsvcid": "4420" 00:20:38.569 }, 00:20:38.569 "peer_address": { 00:20:38.569 "trtype": "TCP", 00:20:38.569 "adrfam": "IPv4", 00:20:38.569 "traddr": "10.0.0.1", 00:20:38.569 "trsvcid": "50678" 00:20:38.569 }, 00:20:38.569 "auth": { 00:20:38.569 "state": "completed", 00:20:38.569 "digest": "sha512", 00:20:38.569 "dhgroup": "null" 00:20:38.569 } 00:20:38.569 } 00:20:38.569 ]' 00:20:38.569 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.827 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.827 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.827 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:38.827 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.827 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.827 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.827 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.085 13:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:20:40.017 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.017 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:40.018 13:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.018 13:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.018 13:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.018 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.018 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.018 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.018 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.018 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.275 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.532 00:20:40.532 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.532 13:28:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.532 13:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.789 { 00:20:40.789 "cntlid": 105, 00:20:40.789 "qid": 0, 00:20:40.789 "state": "enabled", 00:20:40.789 "thread": "nvmf_tgt_poll_group_000", 00:20:40.789 "listen_address": { 00:20:40.789 "trtype": "TCP", 00:20:40.789 "adrfam": "IPv4", 00:20:40.789 "traddr": "10.0.0.2", 00:20:40.789 "trsvcid": "4420" 00:20:40.789 }, 00:20:40.789 "peer_address": { 00:20:40.789 "trtype": "TCP", 00:20:40.789 "adrfam": "IPv4", 00:20:40.789 "traddr": "10.0.0.1", 00:20:40.789 "trsvcid": "54526" 00:20:40.789 }, 00:20:40.789 "auth": { 00:20:40.789 "state": "completed", 00:20:40.789 "digest": "sha512", 00:20:40.789 "dhgroup": "ffdhe2048" 00:20:40.789 } 00:20:40.789 } 00:20:40.789 ]' 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.789 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.047 13:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:20:41.979 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.979 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:41.979 13:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.979 13:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
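[Editor's note] The block below is an illustrative sketch, not part of the captured run: it condenses the single connect_authenticate iteration that target/auth.sh repeats in the trace above for every digest/dhgroup/key combination. The RPC names, NQNs, address and key labels are copied from the trace itself; the literal DHHC-1 secrets are replaced with placeholders, the target-side rpc.py calls are assumed to use the default RPC socket (the trace's rpc_cmd), and the registration of the key0/ckey0 key material happens earlier in the script, outside this excerpt.

# One pass of the auth test cycle, as exercised in the xtrace above (sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
digest=sha512 dhgroup=ffdhe2048 keyid=0

# Restrict the SPDK host stack to the digest/dhgroup under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the target subsystem with the matching DH-HMAC-CHAP keys.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# SPDK host path: attach a controller through bdev_nvme ...
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# ... then confirm on the target that the queue pair finished authentication.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect: the digest above
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: the dhgroup above
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

# Kernel initiator path: same subsystem via nvme-cli with the raw secrets
# (placeholders here; the trace passes the generated DHHC-1 strings verbatim).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret "DHHC-1:00:<host key>" --dhchap-ctrl-secret "DHHC-1:03:<ctrl key>"
nvme disconnect -n "$subnqn"

# Drop the host entry again before the next digest/dhgroup/key combination.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each pass therefore exercises the same key material twice: once through the SPDK host stack, with the qpair auth state verified via nvmf_subsystem_get_qpairs and jq, and once through the kernel initiator via nvme-cli with the raw DHHC-1 secrets. The trace resumes below with the next key in the ffdhe2048 group.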
00:20:41.979 13:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.979 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.979 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.979 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.237 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.494 00:20:42.494 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.494 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.494 13:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.751 { 00:20:42.751 "cntlid": 107, 00:20:42.751 "qid": 0, 00:20:42.751 "state": "enabled", 00:20:42.751 "thread": 
"nvmf_tgt_poll_group_000", 00:20:42.751 "listen_address": { 00:20:42.751 "trtype": "TCP", 00:20:42.751 "adrfam": "IPv4", 00:20:42.751 "traddr": "10.0.0.2", 00:20:42.751 "trsvcid": "4420" 00:20:42.751 }, 00:20:42.751 "peer_address": { 00:20:42.751 "trtype": "TCP", 00:20:42.751 "adrfam": "IPv4", 00:20:42.751 "traddr": "10.0.0.1", 00:20:42.751 "trsvcid": "54556" 00:20:42.751 }, 00:20:42.751 "auth": { 00:20:42.751 "state": "completed", 00:20:42.751 "digest": "sha512", 00:20:42.751 "dhgroup": "ffdhe2048" 00:20:42.751 } 00:20:42.751 } 00:20:42.751 ]' 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.751 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.008 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.008 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.008 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.008 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.008 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.265 13:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:20:44.196 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.196 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:44.196 13:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.196 13:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.196 13:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.196 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.197 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.197 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.454 13:28:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.454 13:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.711 00:20:44.711 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.711 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.711 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.968 { 00:20:44.968 "cntlid": 109, 00:20:44.968 "qid": 0, 00:20:44.968 "state": "enabled", 00:20:44.968 "thread": "nvmf_tgt_poll_group_000", 00:20:44.968 "listen_address": { 00:20:44.968 "trtype": "TCP", 00:20:44.968 "adrfam": "IPv4", 00:20:44.968 "traddr": "10.0.0.2", 00:20:44.968 "trsvcid": "4420" 00:20:44.968 }, 00:20:44.968 "peer_address": { 00:20:44.968 "trtype": "TCP", 00:20:44.968 "adrfam": "IPv4", 00:20:44.968 "traddr": "10.0.0.1", 00:20:44.968 "trsvcid": "54568" 00:20:44.968 }, 00:20:44.968 "auth": { 00:20:44.968 "state": "completed", 00:20:44.968 "digest": "sha512", 00:20:44.968 "dhgroup": "ffdhe2048" 00:20:44.968 } 00:20:44.968 } 00:20:44.968 ]' 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.968 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.225 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.225 13:28:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.225 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.225 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.225 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.483 13:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:20:46.412 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.412 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.412 13:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.412 13:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 13:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.412 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.412 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.412 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.669 13:28:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.926 00:20:46.926 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.926 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.926 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.182 { 00:20:47.182 "cntlid": 111, 00:20:47.182 "qid": 0, 00:20:47.182 "state": "enabled", 00:20:47.182 "thread": "nvmf_tgt_poll_group_000", 00:20:47.182 "listen_address": { 00:20:47.182 "trtype": "TCP", 00:20:47.182 "adrfam": "IPv4", 00:20:47.182 "traddr": "10.0.0.2", 00:20:47.182 "trsvcid": "4420" 00:20:47.182 }, 00:20:47.182 "peer_address": { 00:20:47.182 "trtype": "TCP", 00:20:47.182 "adrfam": "IPv4", 00:20:47.182 "traddr": "10.0.0.1", 00:20:47.182 "trsvcid": "54610" 00:20:47.182 }, 00:20:47.182 "auth": { 00:20:47.182 "state": "completed", 00:20:47.182 "digest": "sha512", 00:20:47.182 "dhgroup": "ffdhe2048" 00:20:47.182 } 00:20:47.182 } 00:20:47.182 ]' 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.182 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.439 13:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.370 13:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.627 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.191 00:20:49.191 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.191 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.191 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.499 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.499 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.499 13:28:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.499 13:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.499 13:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.499 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.499 { 00:20:49.500 "cntlid": 113, 00:20:49.500 "qid": 0, 00:20:49.500 "state": "enabled", 00:20:49.500 "thread": "nvmf_tgt_poll_group_000", 00:20:49.500 "listen_address": { 00:20:49.500 "trtype": "TCP", 00:20:49.500 "adrfam": "IPv4", 00:20:49.500 "traddr": "10.0.0.2", 00:20:49.500 "trsvcid": "4420" 00:20:49.500 }, 00:20:49.500 "peer_address": { 00:20:49.500 "trtype": "TCP", 00:20:49.500 "adrfam": "IPv4", 00:20:49.500 "traddr": "10.0.0.1", 00:20:49.500 "trsvcid": "55220" 00:20:49.500 }, 00:20:49.500 "auth": { 00:20:49.500 "state": "completed", 00:20:49.500 "digest": "sha512", 00:20:49.500 "dhgroup": "ffdhe3072" 00:20:49.500 } 00:20:49.500 } 00:20:49.500 ]' 00:20:49.500 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.500 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.500 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.500 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.500 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.500 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.500 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.500 13:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.757 13:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:20:50.689 13:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.689 13:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.689 13:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.689 13:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.689 13:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.689 13:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.689 13:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.689 13:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.947 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.205 00:20:51.205 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.205 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.205 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.463 { 00:20:51.463 "cntlid": 115, 00:20:51.463 "qid": 0, 00:20:51.463 "state": "enabled", 00:20:51.463 "thread": "nvmf_tgt_poll_group_000", 00:20:51.463 "listen_address": { 00:20:51.463 "trtype": "TCP", 00:20:51.463 "adrfam": "IPv4", 00:20:51.463 "traddr": "10.0.0.2", 00:20:51.463 "trsvcid": "4420" 00:20:51.463 }, 00:20:51.463 "peer_address": { 00:20:51.463 "trtype": "TCP", 00:20:51.463 "adrfam": "IPv4", 00:20:51.463 "traddr": "10.0.0.1", 00:20:51.463 "trsvcid": "55248" 00:20:51.463 }, 00:20:51.463 "auth": { 00:20:51.463 "state": "completed", 00:20:51.463 "digest": "sha512", 00:20:51.463 "dhgroup": "ffdhe3072" 00:20:51.463 } 00:20:51.463 } 
00:20:51.463 ]' 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.463 13:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.721 13:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.092 13:28:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.092 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.350 00:20:53.350 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.608 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.608 13:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.608 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.608 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.608 13:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.608 13:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.864 13:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.864 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.864 { 00:20:53.864 "cntlid": 117, 00:20:53.864 "qid": 0, 00:20:53.864 "state": "enabled", 00:20:53.864 "thread": "nvmf_tgt_poll_group_000", 00:20:53.864 "listen_address": { 00:20:53.864 "trtype": "TCP", 00:20:53.864 "adrfam": "IPv4", 00:20:53.864 "traddr": "10.0.0.2", 00:20:53.864 "trsvcid": "4420" 00:20:53.864 }, 00:20:53.864 "peer_address": { 00:20:53.864 "trtype": "TCP", 00:20:53.864 "adrfam": "IPv4", 00:20:53.864 "traddr": "10.0.0.1", 00:20:53.864 "trsvcid": "55266" 00:20:53.864 }, 00:20:53.864 "auth": { 00:20:53.864 "state": "completed", 00:20:53.864 "digest": "sha512", 00:20:53.864 "dhgroup": "ffdhe3072" 00:20:53.864 } 00:20:53.864 } 00:20:53.864 ]' 00:20:53.864 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.864 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.864 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.864 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.864 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.864 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.865 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.865 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.121 13:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:20:55.050 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.050 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:55.050 13:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.050 13:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.050 13:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.050 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.050 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.050 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.307 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.564 00:20:55.564 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.564 13:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.564 13:28:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.820 { 00:20:55.820 "cntlid": 119, 00:20:55.820 "qid": 0, 00:20:55.820 "state": "enabled", 00:20:55.820 "thread": "nvmf_tgt_poll_group_000", 00:20:55.820 "listen_address": { 00:20:55.820 "trtype": "TCP", 00:20:55.820 "adrfam": "IPv4", 00:20:55.820 "traddr": "10.0.0.2", 00:20:55.820 "trsvcid": "4420" 00:20:55.820 }, 00:20:55.820 "peer_address": { 00:20:55.820 "trtype": "TCP", 00:20:55.820 "adrfam": "IPv4", 00:20:55.820 "traddr": "10.0.0.1", 00:20:55.820 "trsvcid": "55298" 00:20:55.820 }, 00:20:55.820 "auth": { 00:20:55.820 "state": "completed", 00:20:55.820 "digest": "sha512", 00:20:55.820 "dhgroup": "ffdhe3072" 00:20:55.820 } 00:20:55.820 } 00:20:55.820 ]' 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.820 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.087 13:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:20:57.018 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.018 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:57.018 13:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.018 13:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.018 13:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.018 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.018 13:28:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.018 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.018 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.275 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.532 00:20:57.532 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.532 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.532 13:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.790 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.790 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.790 13:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.790 13:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.790 13:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.790 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.790 { 00:20:57.790 "cntlid": 121, 00:20:57.790 "qid": 0, 00:20:57.790 "state": "enabled", 00:20:57.790 "thread": "nvmf_tgt_poll_group_000", 00:20:57.790 "listen_address": { 00:20:57.790 "trtype": "TCP", 00:20:57.790 "adrfam": "IPv4", 
00:20:57.790 "traddr": "10.0.0.2", 00:20:57.790 "trsvcid": "4420" 00:20:57.790 }, 00:20:57.790 "peer_address": { 00:20:57.790 "trtype": "TCP", 00:20:57.790 "adrfam": "IPv4", 00:20:57.790 "traddr": "10.0.0.1", 00:20:57.790 "trsvcid": "55316" 00:20:57.790 }, 00:20:57.790 "auth": { 00:20:57.790 "state": "completed", 00:20:57.790 "digest": "sha512", 00:20:57.790 "dhgroup": "ffdhe4096" 00:20:57.790 } 00:20:57.790 } 00:20:57.790 ]' 00:20:57.790 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.047 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.047 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.047 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.047 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.047 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.047 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.047 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.305 13:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:20:59.238 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.238 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:59.238 13:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.238 13:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.238 13:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.238 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.238 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.238 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:59.496 13:28:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.496 13:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.753 00:20:59.753 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.753 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.753 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.010 { 00:21:00.010 "cntlid": 123, 00:21:00.010 "qid": 0, 00:21:00.010 "state": "enabled", 00:21:00.010 "thread": "nvmf_tgt_poll_group_000", 00:21:00.010 "listen_address": { 00:21:00.010 "trtype": "TCP", 00:21:00.010 "adrfam": "IPv4", 00:21:00.010 "traddr": "10.0.0.2", 00:21:00.010 "trsvcid": "4420" 00:21:00.010 }, 00:21:00.010 "peer_address": { 00:21:00.010 "trtype": "TCP", 00:21:00.010 "adrfam": "IPv4", 00:21:00.010 "traddr": "10.0.0.1", 00:21:00.010 "trsvcid": "38310" 00:21:00.010 }, 00:21:00.010 "auth": { 00:21:00.010 "state": "completed", 00:21:00.010 "digest": "sha512", 00:21:00.010 "dhgroup": "ffdhe4096" 00:21:00.010 } 00:21:00.010 } 00:21:00.010 ]' 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.010 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.267 13:28:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.267 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.267 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.524 13:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:21:01.457 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.457 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.457 13:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.457 13:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.457 13:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.457 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.457 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.457 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.714 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.715 13:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.972 00:21:01.972 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.972 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.972 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.229 { 00:21:02.229 "cntlid": 125, 00:21:02.229 "qid": 0, 00:21:02.229 "state": "enabled", 00:21:02.229 "thread": "nvmf_tgt_poll_group_000", 00:21:02.229 "listen_address": { 00:21:02.229 "trtype": "TCP", 00:21:02.229 "adrfam": "IPv4", 00:21:02.229 "traddr": "10.0.0.2", 00:21:02.229 "trsvcid": "4420" 00:21:02.229 }, 00:21:02.229 "peer_address": { 00:21:02.229 "trtype": "TCP", 00:21:02.229 "adrfam": "IPv4", 00:21:02.229 "traddr": "10.0.0.1", 00:21:02.229 "trsvcid": "38336" 00:21:02.229 }, 00:21:02.229 "auth": { 00:21:02.229 "state": "completed", 00:21:02.229 "digest": "sha512", 00:21:02.229 "dhgroup": "ffdhe4096" 00:21:02.229 } 00:21:02.229 } 00:21:02.229 ]' 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.229 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.486 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.486 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.486 13:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.744 13:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:21:03.741 13:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
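The trace above repeats the same host/target handshake for every digest, DH group and key index. Condensed into a sketch, one iteration looks roughly like the block below; the RPC names and flags are taken verbatim from the trace, while the shell variables and loop values are illustrative placeholders, and key2/ckey2 refer to key names registered earlier in this test run.

# Illustrative sketch (not the test script itself) of one iteration of the
# digest/dhgroup/key matrix exercised above. Variables are assumed placeholders.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
digest=sha512; dhgroup=ffdhe4096; keyid=2

# Host side: restrict the initiator bdev layer to one digest/dhgroup pair.
$rpc -s $hostsock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: allow the host NQN with the matching DH-HMAC-CHAP key pair.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: attach a controller over TCP, authenticating with the same keys.
$rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"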
00:21:03.741 13:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:03.741 13:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.741 13:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.741 13:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.741 13:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.741 13:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.741 13:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.741 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.306 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.306 { 00:21:04.306 "cntlid": 127, 00:21:04.306 "qid": 0, 00:21:04.306 "state": "enabled", 00:21:04.306 "thread": "nvmf_tgt_poll_group_000", 00:21:04.306 "listen_address": { 00:21:04.306 "trtype": "TCP", 00:21:04.306 "adrfam": "IPv4", 00:21:04.306 "traddr": "10.0.0.2", 00:21:04.306 "trsvcid": "4420" 00:21:04.306 }, 00:21:04.306 "peer_address": { 00:21:04.306 "trtype": "TCP", 00:21:04.306 "adrfam": "IPv4", 00:21:04.306 "traddr": "10.0.0.1", 00:21:04.306 "trsvcid": "38378" 00:21:04.306 }, 00:21:04.306 "auth": { 00:21:04.306 "state": "completed", 00:21:04.306 "digest": "sha512", 00:21:04.306 "dhgroup": "ffdhe4096" 00:21:04.306 } 00:21:04.306 } 00:21:04.306 ]' 00:21:04.306 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.564 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.564 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.564 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.564 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.564 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.564 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.564 13:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.821 13:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.755 13:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.755 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.319 00:21:06.319 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.319 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.319 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.577 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.577 13:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.577 13:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.577 13:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.577 13:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.577 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.577 { 00:21:06.577 "cntlid": 129, 00:21:06.577 "qid": 0, 00:21:06.577 "state": "enabled", 00:21:06.577 "thread": "nvmf_tgt_poll_group_000", 00:21:06.577 "listen_address": { 00:21:06.577 "trtype": "TCP", 00:21:06.577 "adrfam": "IPv4", 00:21:06.577 "traddr": "10.0.0.2", 00:21:06.577 "trsvcid": "4420" 00:21:06.577 }, 00:21:06.577 "peer_address": { 00:21:06.577 "trtype": "TCP", 00:21:06.577 "adrfam": "IPv4", 00:21:06.577 "traddr": "10.0.0.1", 00:21:06.577 "trsvcid": "38414" 00:21:06.577 }, 00:21:06.577 "auth": { 00:21:06.577 "state": "completed", 00:21:06.577 "digest": "sha512", 00:21:06.577 "dhgroup": "ffdhe6144" 00:21:06.577 } 00:21:06.577 } 00:21:06.577 ]' 00:21:06.577 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.834 13:29:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.834 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.834 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.834 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.834 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.834 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.834 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.091 13:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:21:08.023 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.023 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:08.023 13:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.023 13:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.023 13:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.023 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.023 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.023 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.281 13:29:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.281 13:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.844 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.844 { 00:21:08.844 "cntlid": 131, 00:21:08.844 "qid": 0, 00:21:08.844 "state": "enabled", 00:21:08.844 "thread": "nvmf_tgt_poll_group_000", 00:21:08.844 "listen_address": { 00:21:08.844 "trtype": "TCP", 00:21:08.844 "adrfam": "IPv4", 00:21:08.844 "traddr": "10.0.0.2", 00:21:08.844 "trsvcid": "4420" 00:21:08.844 }, 00:21:08.844 "peer_address": { 00:21:08.844 "trtype": "TCP", 00:21:08.844 "adrfam": "IPv4", 00:21:08.844 "traddr": "10.0.0.1", 00:21:08.844 "trsvcid": "38436" 00:21:08.844 }, 00:21:08.844 "auth": { 00:21:08.844 "state": "completed", 00:21:08.844 "digest": "sha512", 00:21:08.844 "dhgroup": "ffdhe6144" 00:21:08.844 } 00:21:08.844 } 00:21:08.844 ]' 00:21:08.844 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.101 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.101 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.101 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.101 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.101 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.101 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.101 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.358 13:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:21:10.289 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.289 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.289 13:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.289 13:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.289 13:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.289 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.289 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.289 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.546 13:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.110 00:21:11.110 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.110 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.110 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.367 { 00:21:11.367 "cntlid": 133, 00:21:11.367 "qid": 0, 00:21:11.367 "state": "enabled", 00:21:11.367 "thread": "nvmf_tgt_poll_group_000", 00:21:11.367 "listen_address": { 00:21:11.367 "trtype": "TCP", 00:21:11.367 "adrfam": "IPv4", 00:21:11.367 "traddr": "10.0.0.2", 00:21:11.367 "trsvcid": "4420" 00:21:11.367 }, 00:21:11.367 "peer_address": { 00:21:11.367 "trtype": "TCP", 00:21:11.367 "adrfam": "IPv4", 00:21:11.367 "traddr": "10.0.0.1", 00:21:11.367 "trsvcid": "40634" 00:21:11.367 }, 00:21:11.367 "auth": { 00:21:11.367 "state": "completed", 00:21:11.367 "digest": "sha512", 00:21:11.367 "dhgroup": "ffdhe6144" 00:21:11.367 } 00:21:11.367 } 00:21:11.367 ]' 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.367 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.624 13:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:21:12.556 13:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.556 13:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:12.556 13:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.556 13:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.556 13:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
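Each iteration also exercises the in-kernel initiator, as seen in the nvme connect/disconnect entries above. A minimal sketch of that leg, assuming the same NQNs as this run and using $key / $ctrl_key as placeholders for the DHHC-1 secrets printed in the trace:

# Illustrative sketch of the kernel-initiator leg of each iteration.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
hostid=29f67375-a902-e411-ace9-001e67bc3c9a

# Connect through the kernel NVMe/TCP initiator, authenticating with the
# same key pair the target subsystem was configured with.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"

# Tear the association down and revoke the host again before the next
# digest/dhgroup/key combination is tried.
nvme disconnect -n "$subnqn"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host "$subnqn" "$hostnqn"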
00:21:12.556 13:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.556 13:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:12.556 13:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.814 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.380 00:21:13.380 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.380 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.380 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.639 { 00:21:13.639 "cntlid": 135, 00:21:13.639 "qid": 0, 00:21:13.639 "state": "enabled", 00:21:13.639 "thread": "nvmf_tgt_poll_group_000", 00:21:13.639 "listen_address": { 00:21:13.639 "trtype": "TCP", 00:21:13.639 "adrfam": "IPv4", 00:21:13.639 "traddr": "10.0.0.2", 00:21:13.639 "trsvcid": 
"4420" 00:21:13.639 }, 00:21:13.639 "peer_address": { 00:21:13.639 "trtype": "TCP", 00:21:13.639 "adrfam": "IPv4", 00:21:13.639 "traddr": "10.0.0.1", 00:21:13.639 "trsvcid": "40666" 00:21:13.639 }, 00:21:13.639 "auth": { 00:21:13.639 "state": "completed", 00:21:13.639 "digest": "sha512", 00:21:13.639 "dhgroup": "ffdhe6144" 00:21:13.639 } 00:21:13.639 } 00:21:13.639 ]' 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.639 13:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.639 13:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.639 13:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.639 13:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.639 13:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.639 13:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.896 13:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:14.830 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.395 13:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.960 00:21:16.217 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.217 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.217 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.475 { 00:21:16.475 "cntlid": 137, 00:21:16.475 "qid": 0, 00:21:16.475 "state": "enabled", 00:21:16.475 "thread": "nvmf_tgt_poll_group_000", 00:21:16.475 "listen_address": { 00:21:16.475 "trtype": "TCP", 00:21:16.475 "adrfam": "IPv4", 00:21:16.475 "traddr": "10.0.0.2", 00:21:16.475 "trsvcid": "4420" 00:21:16.475 }, 00:21:16.475 "peer_address": { 00:21:16.475 "trtype": "TCP", 00:21:16.475 "adrfam": "IPv4", 00:21:16.475 "traddr": "10.0.0.1", 00:21:16.475 "trsvcid": "40688" 00:21:16.475 }, 00:21:16.475 "auth": { 00:21:16.475 "state": "completed", 00:21:16.475 "digest": "sha512", 00:21:16.475 "dhgroup": "ffdhe8192" 00:21:16.475 } 00:21:16.475 } 00:21:16.475 ]' 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.475 13:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.733 13:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:21:17.704 13:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.704 13:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:17.704 13:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.704 13:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.704 13:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.704 13:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.704 13:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.704 13:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.962 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:17.962 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.963 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.528 00:21:18.528 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.528 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.528 13:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.785 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.785 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.785 13:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.785 13:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.785 13:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.785 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.785 { 00:21:18.785 "cntlid": 139, 00:21:18.785 "qid": 0, 00:21:18.785 "state": "enabled", 00:21:18.785 "thread": "nvmf_tgt_poll_group_000", 00:21:18.785 "listen_address": { 00:21:18.785 "trtype": "TCP", 00:21:18.786 "adrfam": "IPv4", 00:21:18.786 "traddr": "10.0.0.2", 00:21:18.786 "trsvcid": "4420" 00:21:18.786 }, 00:21:18.786 "peer_address": { 00:21:18.786 "trtype": "TCP", 00:21:18.786 "adrfam": "IPv4", 00:21:18.786 "traddr": "10.0.0.1", 00:21:18.786 "trsvcid": "40698" 00:21:18.786 }, 00:21:18.786 "auth": { 00:21:18.786 "state": "completed", 00:21:18.786 "digest": "sha512", 00:21:18.786 "dhgroup": "ffdhe8192" 00:21:18.786 } 00:21:18.786 } 00:21:18.786 ]' 00:21:18.786 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.042 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.042 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.042 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.042 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.042 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.042 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.042 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.299 13:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:M2IwODBiYzU4OWYzODAwMDMzNDg2MmRjNzEyZjE2NzCyJcb0: --dhchap-ctrl-secret DHHC-1:02:YTI3YjNjMDgwNDlhNDMwMjJiMDM4NmE5YzEzYTNjMjQ2NDRjZGY5MWZkNjk0ODk2WtS/Jw==: 00:21:20.230 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
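At this point in the trace one full per-key pass for sha512/ffdhe8192 has completed. Condensed, each pass is the sequence sketched below; this is only a sketch that reuses the RPCs, addresses and NQNs exactly as they appear in this run (rpc.py stands for spdk/scripts/rpc.py, rpc_cmd talks to the nvmf_tgt under test, and the DHHC-1 secret strings are the ones printed on the nvme connect lines), not the literal contents of target/auth.sh:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  subnqn=nqn.2024-03.io.spdk:cnode0
  # host side: restrict the initiator to the digest/dhgroup pair under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side: allow this host NQN to authenticate with the key pair under test
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach over TCP, then confirm the qpair finished DH-HMAC-CHAP
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .digest, .dhgroup, .state'
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator using the matching DHHC-1 secrets, then tear down
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-secret "$key1_secret" --dhchap-ctrl-secret "$ckey1_secret"
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"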
00:21:20.230 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:20.230 13:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.230 13:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.230 13:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.230 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.230 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.230 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.487 13:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.418 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.418 { 00:21:21.418 "cntlid": 141, 00:21:21.418 "qid": 0, 00:21:21.418 "state": "enabled", 00:21:21.418 "thread": "nvmf_tgt_poll_group_000", 00:21:21.418 "listen_address": { 00:21:21.418 "trtype": "TCP", 00:21:21.418 "adrfam": "IPv4", 00:21:21.418 "traddr": "10.0.0.2", 00:21:21.418 "trsvcid": "4420" 00:21:21.418 }, 00:21:21.418 "peer_address": { 00:21:21.418 "trtype": "TCP", 00:21:21.418 "adrfam": "IPv4", 00:21:21.418 "traddr": "10.0.0.1", 00:21:21.418 "trsvcid": "49572" 00:21:21.418 }, 00:21:21.418 "auth": { 00:21:21.418 "state": "completed", 00:21:21.418 "digest": "sha512", 00:21:21.418 "dhgroup": "ffdhe8192" 00:21:21.418 } 00:21:21.418 } 00:21:21.418 ]' 00:21:21.418 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.675 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.675 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.675 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.675 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.675 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.675 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.675 13:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.932 13:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:Njg0YmQ0ZmU2NzUwMmJkNWFhZTQzYjM4NzQ2ZjlmMmM4ZWZhYjhjYmRhZGE1ZWZlBTKu8g==: --dhchap-ctrl-secret DHHC-1:01:ZWU4NTkyYzFjNzI2NTMwNGMxNTYxMmM5YzhmZmU3YWQbe/3e: 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.904 13:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.161 13:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.161 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.161 13:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.725 00:21:23.725 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.726 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.726 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.983 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.983 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.983 13:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.983 13:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.240 { 00:21:24.240 "cntlid": 143, 00:21:24.240 "qid": 0, 00:21:24.240 "state": "enabled", 00:21:24.240 "thread": "nvmf_tgt_poll_group_000", 00:21:24.240 "listen_address": { 00:21:24.240 "trtype": "TCP", 00:21:24.240 "adrfam": "IPv4", 00:21:24.240 "traddr": "10.0.0.2", 00:21:24.240 "trsvcid": "4420" 00:21:24.240 }, 00:21:24.240 "peer_address": { 00:21:24.240 "trtype": "TCP", 00:21:24.240 "adrfam": "IPv4", 00:21:24.240 "traddr": "10.0.0.1", 00:21:24.240 "trsvcid": "49594" 00:21:24.240 }, 00:21:24.240 "auth": { 00:21:24.240 "state": "completed", 00:21:24.240 "digest": "sha512", 00:21:24.240 "dhgroup": "ffdhe8192" 00:21:24.240 } 00:21:24.240 } 00:21:24.240 ]' 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.240 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.496 13:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:25.451 13:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.452 13:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.452 13:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.709 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.640 00:21:26.640 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.640 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.640 13:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.898 { 00:21:26.898 "cntlid": 145, 00:21:26.898 "qid": 0, 00:21:26.898 "state": "enabled", 00:21:26.898 "thread": "nvmf_tgt_poll_group_000", 00:21:26.898 "listen_address": { 00:21:26.898 "trtype": "TCP", 00:21:26.898 "adrfam": "IPv4", 00:21:26.898 "traddr": "10.0.0.2", 00:21:26.898 "trsvcid": "4420" 00:21:26.898 }, 00:21:26.898 "peer_address": { 00:21:26.898 "trtype": "TCP", 00:21:26.898 "adrfam": "IPv4", 00:21:26.898 "traddr": "10.0.0.1", 00:21:26.898 "trsvcid": "49614" 00:21:26.898 }, 00:21:26.898 "auth": { 00:21:26.898 "state": "completed", 00:21:26.898 "digest": "sha512", 00:21:26.898 "dhgroup": "ffdhe8192" 00:21:26.898 } 00:21:26.898 } 00:21:26.898 ]' 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.898 13:29:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.156 13:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZTBmZDM4ZTliYzQzZTgyOWZkYzZjZjZhZTEwM2QzMzQ2MWJjNDZlY2M3NmJjNjA0W+OHMg==: --dhchap-ctrl-secret DHHC-1:03:YWY5NWRiNjU2OWU4ZmJkZjU0NmU2ZTdjMDdkY2I4MTU1NjgzYmEwNmZiYmM0NTVjYjQ2ZjgwNzhiYTA5Y2U2ZMxWwdU=: 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.089 13:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:29.022 request: 00:21:29.022 { 00:21:29.022 "name": "nvme0", 00:21:29.022 "trtype": "tcp", 00:21:29.022 "traddr": "10.0.0.2", 00:21:29.022 "adrfam": "ipv4", 00:21:29.022 "trsvcid": "4420", 00:21:29.022 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:29.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:29.022 "prchk_reftag": false, 00:21:29.022 "prchk_guard": false, 00:21:29.022 "hdgst": false, 00:21:29.022 "ddgst": false, 00:21:29.022 "dhchap_key": "key2", 00:21:29.022 "method": "bdev_nvme_attach_controller", 00:21:29.022 "req_id": 1 00:21:29.022 } 00:21:29.022 Got JSON-RPC error response 00:21:29.022 response: 00:21:29.022 { 00:21:29.022 "code": -5, 00:21:29.022 "message": "Input/output error" 00:21:29.022 } 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:29.022 13:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:29.956 request: 00:21:29.956 { 00:21:29.956 "name": "nvme0", 00:21:29.956 "trtype": "tcp", 00:21:29.956 "traddr": "10.0.0.2", 00:21:29.956 "adrfam": "ipv4", 00:21:29.956 "trsvcid": "4420", 00:21:29.956 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:29.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:29.956 "prchk_reftag": false, 00:21:29.956 "prchk_guard": false, 00:21:29.956 "hdgst": false, 00:21:29.956 "ddgst": false, 00:21:29.956 "dhchap_key": "key1", 00:21:29.956 "dhchap_ctrlr_key": "ckey2", 00:21:29.956 "method": "bdev_nvme_attach_controller", 00:21:29.956 "req_id": 1 00:21:29.956 } 00:21:29.956 Got JSON-RPC error response 00:21:29.956 response: 00:21:29.956 { 00:21:29.956 "code": -5, 00:21:29.956 "message": "Input/output error" 00:21:29.956 } 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.956 13:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.521 request: 00:21:30.521 { 00:21:30.521 "name": "nvme0", 00:21:30.521 "trtype": "tcp", 00:21:30.521 "traddr": "10.0.0.2", 00:21:30.521 "adrfam": "ipv4", 00:21:30.521 "trsvcid": "4420", 00:21:30.521 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:30.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:30.521 "prchk_reftag": false, 00:21:30.521 "prchk_guard": false, 00:21:30.521 "hdgst": false, 00:21:30.521 "ddgst": false, 00:21:30.521 "dhchap_key": "key1", 00:21:30.521 "dhchap_ctrlr_key": "ckey1", 00:21:30.521 "method": "bdev_nvme_attach_controller", 00:21:30.521 "req_id": 1 00:21:30.521 } 00:21:30.521 Got JSON-RPC error response 00:21:30.521 response: 00:21:30.521 { 00:21:30.521 "code": -5, 00:21:30.521 "message": "Input/output error" 00:21:30.521 } 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3583418 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3583418 ']' 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3583418 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3583418 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3583418' 00:21:30.521 killing process with pid 3583418 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3583418 00:21:30.521 13:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3583418 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3605078 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3605078 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3605078 ']' 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.779 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3605078 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3605078 ']' 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
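Two RPC endpoints are in play here, which is why every host-side call in the trace carries -s /var/tmp/host.sock while the freshly restarted nvmf_tgt is waited on at /var/tmp/spdk.sock: rpc_cmd drives the target application, and the hostrpc wrapper (the target/auth.sh@31 lines) drives the separate bdev_nvme initiator process. A minimal sketch of that split, with simplified wrapper bodies assumed for illustration (the real helpers in common/autotest_common.sh do more bookkeeping):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target-side RPCs: nvmf_tgt listens on the default socket, /var/tmp/spdk.sock
  rpc_cmd() { "$RPC" "$@"; }
  # host-side RPCs: the bdev_nvme initiator app listens on /var/tmp/host.sock
  hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }

  # e.g. the digest/dhgroup restriction applied before each pass (target/auth.sh@94)
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192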
00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.036 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.306 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.306 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:31.306 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:31.306 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.306 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.599 13:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:32.164 00:21:32.164 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.164 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.164 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.421 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.421 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.421 13:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.421 13:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.421 13:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.421 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.421 { 00:21:32.421 
"cntlid": 1, 00:21:32.421 "qid": 0, 00:21:32.421 "state": "enabled", 00:21:32.421 "thread": "nvmf_tgt_poll_group_000", 00:21:32.421 "listen_address": { 00:21:32.421 "trtype": "TCP", 00:21:32.421 "adrfam": "IPv4", 00:21:32.421 "traddr": "10.0.0.2", 00:21:32.421 "trsvcid": "4420" 00:21:32.421 }, 00:21:32.421 "peer_address": { 00:21:32.421 "trtype": "TCP", 00:21:32.421 "adrfam": "IPv4", 00:21:32.421 "traddr": "10.0.0.1", 00:21:32.421 "trsvcid": "48654" 00:21:32.421 }, 00:21:32.421 "auth": { 00:21:32.421 "state": "completed", 00:21:32.421 "digest": "sha512", 00:21:32.421 "dhgroup": "ffdhe8192" 00:21:32.421 } 00:21:32.421 } 00:21:32.421 ]' 00:21:32.421 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.678 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.678 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.678 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.678 13:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.678 13:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.678 13:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.678 13:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.935 13:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:N2M0YjZjOTQxMGNjZmMyYWU1OGUyYjAxNmE0MjQzYTYyZmU3NzVmMjQ2YTYxYjI4NmJhZmRjZTE4MWYxNGY5NzlM2Pw=: 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:33.868 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.125 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.382 request: 00:21:34.382 { 00:21:34.382 "name": "nvme0", 00:21:34.382 "trtype": "tcp", 00:21:34.382 "traddr": "10.0.0.2", 00:21:34.382 "adrfam": "ipv4", 00:21:34.382 "trsvcid": "4420", 00:21:34.382 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:34.382 "prchk_reftag": false, 00:21:34.382 "prchk_guard": false, 00:21:34.382 "hdgst": false, 00:21:34.382 "ddgst": false, 00:21:34.382 "dhchap_key": "key3", 00:21:34.382 "method": "bdev_nvme_attach_controller", 00:21:34.382 "req_id": 1 00:21:34.382 } 00:21:34.382 Got JSON-RPC error response 00:21:34.382 response: 00:21:34.382 { 00:21:34.382 "code": -5, 00:21:34.382 "message": "Input/output error" 00:21:34.382 } 00:21:34.382 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:34.382 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:34.382 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:34.382 13:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:34.382 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:34.382 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:34.382 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:34.382 13:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.654 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.912 request: 00:21:34.912 { 00:21:34.912 "name": "nvme0", 00:21:34.912 "trtype": "tcp", 00:21:34.912 "traddr": "10.0.0.2", 00:21:34.912 "adrfam": "ipv4", 00:21:34.912 "trsvcid": "4420", 00:21:34.912 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:34.912 "prchk_reftag": false, 00:21:34.912 "prchk_guard": false, 00:21:34.912 "hdgst": false, 00:21:34.912 "ddgst": false, 00:21:34.912 "dhchap_key": "key3", 00:21:34.912 "method": "bdev_nvme_attach_controller", 00:21:34.912 "req_id": 1 00:21:34.912 } 00:21:34.912 Got JSON-RPC error response 00:21:34.912 response: 00:21:34.912 { 00:21:34.912 "code": -5, 00:21:34.912 "message": "Input/output error" 00:21:34.912 } 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.912 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.170 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.428 request: 00:21:35.428 { 00:21:35.428 "name": "nvme0", 00:21:35.428 "trtype": "tcp", 00:21:35.428 "traddr": "10.0.0.2", 00:21:35.428 "adrfam": "ipv4", 00:21:35.428 "trsvcid": "4420", 00:21:35.428 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:35.428 "prchk_reftag": false, 00:21:35.428 "prchk_guard": false, 00:21:35.428 "hdgst": false, 00:21:35.428 "ddgst": false, 00:21:35.428 
"dhchap_key": "key0", 00:21:35.428 "dhchap_ctrlr_key": "key1", 00:21:35.428 "method": "bdev_nvme_attach_controller", 00:21:35.428 "req_id": 1 00:21:35.428 } 00:21:35.428 Got JSON-RPC error response 00:21:35.428 response: 00:21:35.428 { 00:21:35.428 "code": -5, 00:21:35.428 "message": "Input/output error" 00:21:35.428 } 00:21:35.689 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:35.689 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.689 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.689 13:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.689 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.689 13:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.947 00:21:35.947 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:35.947 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:35.947 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.205 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.205 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.205 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3583441 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3583441 ']' 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3583441 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3583441 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3583441' 00:21:36.462 killing process with pid 3583441 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3583441 00:21:36.462 13:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3583441 
00:21:36.719 13:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:36.719 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.719 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:36.719 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.719 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:36.719 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.719 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.977 rmmod nvme_tcp 00:21:36.977 rmmod nvme_fabrics 00:21:36.977 rmmod nvme_keyring 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3605078 ']' 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3605078 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3605078 ']' 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3605078 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3605078 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3605078' 00:21:36.977 killing process with pid 3605078 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3605078 00:21:36.977 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3605078 00:21:37.235 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:37.235 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:37.235 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:37.235 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.235 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:37.235 13:29:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.235 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.235 13:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.141 13:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.141 13:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.BC3 /tmp/spdk.key-sha256.BIz /tmp/spdk.key-sha384.AdW /tmp/spdk.key-sha512.2Pv /tmp/spdk.key-sha512.0V5 /tmp/spdk.key-sha384.N6b /tmp/spdk.key-sha256.Ezu '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:39.141 00:21:39.141 real 3m0.630s 00:21:39.141 user 7m2.024s 00:21:39.141 sys 0m25.047s 00:21:39.141 13:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:39.141 13:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.141 ************************************ 00:21:39.141 END TEST nvmf_auth_target 00:21:39.141 ************************************ 00:21:39.141 13:29:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:39.141 13:29:36 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:39.141 13:29:36 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:39.141 13:29:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:39.141 13:29:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.141 13:29:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:39.141 ************************************ 00:21:39.141 START TEST nvmf_bdevio_no_huge 00:21:39.141 ************************************ 00:21:39.141 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:39.399 * Looking for test storage... 00:21:39.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
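The bdevio test starting here is launched through the suite's run_test wrapper; the traced invocation can also be issued by hand against the same checkout. A sketch only: a prepared test NIC pair and root privileges are assumed, as on this rig, and run_test essentially just adds the START/END banners and timing seen in the output.

# as driven by nvmf.sh in this run
run_test nvmf_bdevio_no_huge \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
  --transport=tcp --no-hugepages

# or directly, outside the harness
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
  --transport=tcp --no-hugepages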
00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.399 13:29:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.399 13:29:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
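The long run of nvmf/common.sh assignments above is the NIC discovery preamble: candidate ports are bucketed by PCI vendor/device ID before any interface is touched. Condensed from the trace (only a subset of the Mellanox IDs repeated here), the pattern is roughly:

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})    # matches the 0000:09:00.x ports found below
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs listed in the trace
pci_devs=("${e810[@]}")                      # this rig is treated as an e810 setup, so only e810 ports are kept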
00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:41.304 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:41.304 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:41.304 
13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:41.304 Found net devices under 0000:09:00.0: cvl_0_0 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:41.304 Found net devices under 0000:09:00.1: cvl_0_1 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.304 13:29:38 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:41.304 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:21:41.305 00:21:41.305 --- 10.0.0.2 ping statistics --- 00:21:41.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.305 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:21:41.305 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:21:41.563 00:21:41.563 --- 10.0.0.1 ping statistics --- 00:21:41.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.563 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:41.563 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.563 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:41.563 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.563 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.563 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.563 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3607769 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3607769 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3607769 ']' 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.564 13:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.564 [2024-07-12 13:29:38.855082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:21:41.564 [2024-07-12 13:29:38.855177] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:41.564 [2024-07-12 13:29:38.905988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:41.564 [2024-07-12 13:29:38.924282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.564 [2024-07-12 13:29:39.007568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.564 [2024-07-12 13:29:39.007634] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.564 [2024-07-12 13:29:39.007648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.564 [2024-07-12 13:29:39.007659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.564 [2024-07-12 13:29:39.007668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
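Those EAL/app notices come from the target being started inside the freshly created namespace with hugepages disabled. Ignoring the harness bookkeeping, the traced nvmfappstart amounts to roughly:

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!                    # 3607769 in this run
waitforlisten "$nvmfpid"      # autotest helper; returns once /var/tmp/spdk.sock answers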
00:21:41.564 [2024-07-12 13:29:39.007789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:41.564 [2024-07-12 13:29:39.007852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:41.564 [2024-07-12 13:29:39.007918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:41.564 [2024-07-12 13:29:39.007920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.822 [2024-07-12 13:29:39.128025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.822 Malloc0 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.822 [2024-07-12 13:29:39.165780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:41.822 { 00:21:41.822 "params": { 00:21:41.822 "name": "Nvme$subsystem", 00:21:41.822 "trtype": "$TEST_TRANSPORT", 00:21:41.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.822 "adrfam": "ipv4", 00:21:41.822 "trsvcid": "$NVMF_PORT", 00:21:41.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.822 "hdgst": ${hdgst:-false}, 00:21:41.822 "ddgst": ${ddgst:-false} 00:21:41.822 }, 00:21:41.822 "method": "bdev_nvme_attach_controller" 00:21:41.822 } 00:21:41.822 EOF 00:21:41.822 )") 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:41.822 13:29:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:41.822 "params": { 00:21:41.822 "name": "Nvme1", 00:21:41.822 "trtype": "tcp", 00:21:41.822 "traddr": "10.0.0.2", 00:21:41.822 "adrfam": "ipv4", 00:21:41.822 "trsvcid": "4420", 00:21:41.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.822 "hdgst": false, 00:21:41.822 "ddgst": false 00:21:41.822 }, 00:21:41.822 "method": "bdev_nvme_attach_controller" 00:21:41.822 }' 00:21:41.822 [2024-07-12 13:29:39.213410] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:21:41.822 [2024-07-12 13:29:39.213497] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3607803 ] 00:21:41.822 [2024-07-12 13:29:39.255182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
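The JSON block printed above is what bdevio is fed: gen_nvmf_target_json fills in the Nvme1 attach parameters for this target (tcp, 10.0.0.2:4420, cnode1), and the --json /dev/fd/62 argument in the trace is most likely bash process substitution. Schematically:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
  --json <(gen_nvmf_target_json) --no-huge -s 1024
# bdevio attaches Nvme1n1 from that config and then runs the blockdev test list below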
00:21:41.822 [2024-07-12 13:29:39.274976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:42.079 [2024-07-12 13:29:39.362111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.079 [2024-07-12 13:29:39.362163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.079 [2024-07-12 13:29:39.362166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.337 I/O targets: 00:21:42.337 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:42.337 00:21:42.337 00:21:42.337 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.337 http://cunit.sourceforge.net/ 00:21:42.337 00:21:42.337 00:21:42.337 Suite: bdevio tests on: Nvme1n1 00:21:42.337 Test: blockdev write read block ...passed 00:21:42.337 Test: blockdev write zeroes read block ...passed 00:21:42.337 Test: blockdev write zeroes read no split ...passed 00:21:42.337 Test: blockdev write zeroes read split ...passed 00:21:42.595 Test: blockdev write zeroes read split partial ...passed 00:21:42.595 Test: blockdev reset ...[2024-07-12 13:29:39.846788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.595 [2024-07-12 13:29:39.846891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2463330 (9): Bad file descriptor 00:21:42.595 [2024-07-12 13:29:39.950862] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:42.595 passed 00:21:42.595 Test: blockdev write read 8 blocks ...passed 00:21:42.595 Test: blockdev write read size > 128k ...passed 00:21:42.595 Test: blockdev write read invalid size ...passed 00:21:42.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:42.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:42.595 Test: blockdev write read max offset ...passed 00:21:42.853 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:42.853 Test: blockdev writev readv 8 blocks ...passed 00:21:42.853 Test: blockdev writev readv 30 x 1block ...passed 00:21:42.853 Test: blockdev writev readv block ...passed 00:21:42.853 Test: blockdev writev readv size > 128k ...passed 00:21:42.853 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:42.853 Test: blockdev comparev and writev ...[2024-07-12 13:29:40.167636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.853 [2024-07-12 13:29:40.167690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.167715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.853 [2024-07-12 13:29:40.167732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.168124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.853 [2024-07-12 13:29:40.168148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.168176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:21:42.853 [2024-07-12 13:29:40.168193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.168550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.853 [2024-07-12 13:29:40.168574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.168595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.853 [2024-07-12 13:29:40.168612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.168979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.853 [2024-07-12 13:29:40.169002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.169024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.853 [2024-07-12 13:29:40.169040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:42.853 passed 00:21:42.853 Test: blockdev nvme passthru rw ...passed 00:21:42.853 Test: blockdev nvme passthru vendor specific ...[2024-07-12 13:29:40.252639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.853 [2024-07-12 13:29:40.252667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.252863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.853 [2024-07-12 13:29:40.252885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.253077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.853 [2024-07-12 13:29:40.253100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:42.853 [2024-07-12 13:29:40.253287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.853 [2024-07-12 13:29:40.253310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:42.853 passed 00:21:42.853 Test: blockdev nvme admin passthru ...passed 00:21:42.853 Test: blockdev copy ...passed 00:21:42.853 00:21:42.853 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.853 suites 1 1 n/a 0 0 00:21:42.853 tests 23 23 23 0 0 00:21:42.853 asserts 152 152 152 0 n/a 00:21:42.853 00:21:42.853 Elapsed time = 1.351 seconds 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.420 rmmod nvme_tcp 00:21:43.420 rmmod nvme_fabrics 00:21:43.420 rmmod nvme_keyring 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3607769 ']' 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3607769 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3607769 ']' 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3607769 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3607769 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3607769' 00:21:43.420 killing process with pid 3607769 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3607769 00:21:43.420 13:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3607769 00:21:43.681 13:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.681 13:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.681 13:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.681 13:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.681 13:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.681 13:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.681 13:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.681 13:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:46.214 13:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.214 00:21:46.214 real 0m6.520s 00:21:46.214 user 0m11.199s 00:21:46.214 sys 0m2.524s 00:21:46.214 13:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.214 13:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:46.214 ************************************ 00:21:46.214 END TEST nvmf_bdevio_no_huge 00:21:46.214 ************************************ 00:21:46.214 13:29:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:46.214 13:29:43 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:46.214 13:29:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:46.214 13:29:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.214 13:29:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.214 ************************************ 00:21:46.214 START TEST nvmf_tls 00:21:46.214 ************************************ 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:46.214 * Looking for test storage... 00:21:46.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:46.214 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
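From here tls.sh repeats the per-test plumbing already seen in the bdevio run: nvmftestinit discovers the cvl_0_0/cvl_0_1 pair, moves the target port into its own namespace and checks reachability in both directions. Collected from the traced nvmf_tcp_init steps, the sequence is essentially:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target ns -> initiator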
00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.215 13:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.167 
13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:48.167 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:48.167 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:48.167 Found net devices under 0000:09:00.0: cvl_0_0 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.167 13:29:45 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:48.167 Found net devices under 0000:09:00.1: cvl_0_1 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:48.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:21:48.167 00:21:48.167 --- 10.0.0.2 ping statistics --- 00:21:48.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.167 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:21:48.167 00:21:48.167 --- 10.0.0.1 ping statistics --- 00:21:48.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.167 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3609993 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3609993 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3609993 ']' 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.167 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.168 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.168 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 [2024-07-12 13:29:45.455257] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:21:48.168 [2024-07-12 13:29:45.455374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.168 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.168 [2024-07-12 13:29:45.494075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:48.168 [2024-07-12 13:29:45.520899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.168 [2024-07-12 13:29:45.605129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.168 [2024-07-12 13:29:45.605185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.168 [2024-07-12 13:29:45.605198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.168 [2024-07-12 13:29:45.605209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.168 [2024-07-12 13:29:45.605234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.168 [2024-07-12 13:29:45.605261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.425 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.425 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:48.425 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.425 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.425 13:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.425 13:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.425 13:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:48.425 13:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:48.683 true 00:21:48.683 13:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:48.683 13:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:48.941 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:48.941 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:48.941 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:49.199 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:49.199 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:49.457 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:49.457 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:49.457 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:49.714 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:21:49.714 13:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:49.971 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:49.971 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:49.971 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:49.971 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:50.228 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:50.228 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:50.228 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:50.228 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:50.228 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:50.486 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:50.486 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:50.486 13:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:50.743 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:50.743 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:51.000 13:29:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@121 
-- # mktemp 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.x4QY0gb5bw 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.O54LtBrqV4 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.x4QY0gb5bw 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.O54LtBrqV4 00:21:51.257 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:51.513 13:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:51.770 13:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.x4QY0gb5bw 00:21:51.770 13:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x4QY0gb5bw 00:21:51.770 13:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:52.027 [2024-07-12 13:29:49.478152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.027 13:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:52.592 13:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:52.592 [2024-07-12 13:29:50.063787] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:52.592 [2024-07-12 13:29:50.064018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.850 13:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:53.108 malloc0 00:21:53.108 13:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:53.366 13:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x4QY0gb5bw 00:21:53.623 [2024-07-12 13:29:50.867758] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:53.623 13:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.x4QY0gb5bw 00:21:53.623 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.580 Initializing NVMe Controllers 00:22:03.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:22:03.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.580 Initialization complete. Launching workers. 00:22:03.580 ======================================================== 00:22:03.580 Latency(us) 00:22:03.580 Device Information : IOPS MiB/s Average min max 00:22:03.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8683.88 33.92 7372.06 1145.92 9226.11 00:22:03.580 ======================================================== 00:22:03.580 Total : 8683.88 33.92 7372.06 1145.92 9226.11 00:22:03.580 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x4QY0gb5bw 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.x4QY0gb5bw' 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3611821 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3611821 /var/tmp/bdevperf.sock 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3611821 ']' 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.580 13:30:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.580 [2024-07-12 13:30:01.034524] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:03.580 [2024-07-12 13:30:01.034632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611821 ] 00:22:03.837 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.837 [2024-07-12 13:30:01.068384] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
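Note on the key material used above: the format_interchange_psk / format_key calls (target/tls.sh@118-@119, nvmf/common.sh@702-@705) build the NVMe TLS PSK "interchange format" string: a prefix, a two-digit hash identifier (01/02, commonly SHA-256/SHA-384), and a base64 blob. Decoding the key printed above (NVMeTLSkey-1:01:MDAx...JEiQ:) shows the blob is the configured key with four extra bytes appended, consistent with a CRC32 of the key. The following is only a minimal sketch of what the "python -" one-liner traced above presumably does; the CRC byte order and the exact formatting are assumptions, not taken from the script itself.

format_key_sketch() {
  # args: prefix (e.g. NVMeTLSkey-1), configured key string, hash id (1 or 2)
  local prefix=$1 key=$2 digest=$3
  python3 -c '
import base64, sys, zlib
key = sys.argv[2].encode()
# append a 4-byte CRC32 of the key (little-endian assumed here) and base64 the result
crc = zlib.crc32(key).to_bytes(4, "little")
print("{}:{:02d}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(key + crc).decode()))
' "$prefix" "$key" "$digest"
}
# format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
#   -> a string of the same shape as the key above, NVMeTLSkey-1:01:MDAx...: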
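Stripped of the xtrace noise, the target-side setup driven above (tls.sh@70, @130-@131 and setup_nvmf_tgt, tls.sh@49-@58) reduces to the rpc.py sequence below. All calls and paths are the ones traced above; only the layout is condensed.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # as set at tls.sh@12
key=/tmp/tmp.x4QY0gb5bw                                                # interchange-format PSK, chmod 0600

$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init                                              # leave --wait-for-rpc mode
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"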
00:22:03.837 [2024-07-12 13:30:01.113111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.837 [2024-07-12 13:30:01.217531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.094 13:30:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.094 13:30:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:04.094 13:30:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x4QY0gb5bw 00:22:04.095 [2024-07-12 13:30:01.565878] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.095 [2024-07-12 13:30:01.565998] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:04.352 TLSTESTn1 00:22:04.352 13:30:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:04.352 Running I/O for 10 seconds... 00:22:16.568 00:22:16.568 Latency(us) 00:22:16.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:16.568 Verification LBA range: start 0x0 length 0x2000 00:22:16.568 TLSTESTn1 : 10.06 1527.46 5.97 0.00 0.00 83557.40 9806.13 68739.98 00:22:16.568 =================================================================================================================== 00:22:16.568 Total : 1527.46 5.97 0.00 0.00 83557.40 9806.13 68739.98 00:22:16.568 0 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3611821 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3611821 ']' 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3611821 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3611821 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3611821' 00:22:16.568 killing process with pid 3611821 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3611821 00:22:16.568 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.568 00:22:16.568 Latency(us) 00:22:16.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.568 =================================================================================================================== 00:22:16.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.568 [2024-07-12 13:30:11.890478] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal 
in v24.09 hit 1 times 00:22:16.568 13:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3611821 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O54LtBrqV4 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O54LtBrqV4 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O54LtBrqV4 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O54LtBrqV4' 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3613696 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.568 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3613696 /var/tmp/bdevperf.sock 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3613696 ']' 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.569 [2024-07-12 13:30:12.171386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:16.569 [2024-07-12 13:30:12.171466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613696 ] 00:22:16.569 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.569 [2024-07-12 13:30:12.203435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:22:16.569 [2024-07-12 13:30:12.231441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.569 [2024-07-12 13:30:12.317077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O54LtBrqV4 00:22:16.569 [2024-07-12 13:30:12.684610] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.569 [2024-07-12 13:30:12.684737] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:16.569 [2024-07-12 13:30:12.695743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:16.569 [2024-07-12 13:30:12.696592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ed8d0 (107): Transport endpoint is not connected 00:22:16.569 [2024-07-12 13:30:12.697582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ed8d0 (9): Bad file descriptor 00:22:16.569 [2024-07-12 13:30:12.698581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.569 [2024-07-12 13:30:12.698616] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:16.569 [2024-07-12 13:30:12.698632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
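The attach above is the first negative case: bdevperf is pointed at the TLS listener with the second key (/tmp/tmp.O54LtBrqV4), which was never registered for host1 on cnode1, so the TLS handshake cannot complete and the attach surfaces as the JSON-RPC Input/output error dumped just below. Condensed, the initiator-side call traced above is:

# against the bdevperf application's RPC socket (started with -r /var/tmp/bdevperf.sock)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.O54LtBrqV4    # deliberately mismatched key: expected to fail

The same call with --psk /tmp/tmp.x4QY0gb5bw (the key registered for host1) is what backed the successful TLSTESTn1 run earlier.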
00:22:16.569 request: 00:22:16.569 { 00:22:16.569 "name": "TLSTEST", 00:22:16.569 "trtype": "tcp", 00:22:16.569 "traddr": "10.0.0.2", 00:22:16.569 "adrfam": "ipv4", 00:22:16.569 "trsvcid": "4420", 00:22:16.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.569 "prchk_reftag": false, 00:22:16.569 "prchk_guard": false, 00:22:16.569 "hdgst": false, 00:22:16.569 "ddgst": false, 00:22:16.569 "psk": "/tmp/tmp.O54LtBrqV4", 00:22:16.569 "method": "bdev_nvme_attach_controller", 00:22:16.569 "req_id": 1 00:22:16.569 } 00:22:16.569 Got JSON-RPC error response 00:22:16.569 response: 00:22:16.569 { 00:22:16.569 "code": -5, 00:22:16.569 "message": "Input/output error" 00:22:16.569 } 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3613696 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3613696 ']' 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3613696 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3613696 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3613696' 00:22:16.569 killing process with pid 3613696 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3613696 00:22:16.569 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.569 00:22:16.569 Latency(us) 00:22:16.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.569 =================================================================================================================== 00:22:16.569 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.569 [2024-07-12 13:30:12.749089] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3613696 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.x4QY0gb5bw 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.x4QY0gb5bw 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.x4QY0gb5bw 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.x4QY0gb5bw' 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3613834 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3613834 /var/tmp/bdevperf.sock 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3613834 ']' 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.569 13:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.569 [2024-07-12 13:30:13.012091] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:16.569 [2024-07-12 13:30:13.012181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613834 ] 00:22:16.570 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.570 [2024-07-12 13:30:13.043015] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
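Each of these failure cases is wrapped in autotest_common.sh's NOT/valid_exec_arg machinery, visible above as the es=0 / return 1 / (( !es == 0 )) bookkeeping around run_bdevperf: the wrapped command must exit non-zero for the test step to count as passed. A minimal sketch of that pattern is shown below; the real helper additionally validates the argument type and distinguishes exit-code ranges, as the (( es > 128 )) check above suggests.

NOT() {
    # run the wrapped command and invert its status: succeed only if it failed
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
# e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.x4QY0gb5bw   # wrong hostnqn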
00:22:16.570 [2024-07-12 13:30:13.069301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.570 [2024-07-12 13:30:13.149639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.x4QY0gb5bw 00:22:16.570 [2024-07-12 13:30:13.529382] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.570 [2024-07-12 13:30:13.529505] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:16.570 [2024-07-12 13:30:13.537780] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:16.570 [2024-07-12 13:30:13.537814] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:16.570 [2024-07-12 13:30:13.537867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:16.570 [2024-07-12 13:30:13.538369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a08d0 (107): Transport endpoint is not connected 00:22:16.570 [2024-07-12 13:30:13.539360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a08d0 (9): Bad file descriptor 00:22:16.570 [2024-07-12 13:30:13.540359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.570 [2024-07-12 13:30:13.540380] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:16.570 [2024-07-12 13:30:13.540413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:16.570 request: 00:22:16.570 { 00:22:16.570 "name": "TLSTEST", 00:22:16.570 "trtype": "tcp", 00:22:16.570 "traddr": "10.0.0.2", 00:22:16.570 "adrfam": "ipv4", 00:22:16.570 "trsvcid": "4420", 00:22:16.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.570 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:16.570 "prchk_reftag": false, 00:22:16.570 "prchk_guard": false, 00:22:16.570 "hdgst": false, 00:22:16.570 "ddgst": false, 00:22:16.570 "psk": "/tmp/tmp.x4QY0gb5bw", 00:22:16.570 "method": "bdev_nvme_attach_controller", 00:22:16.570 "req_id": 1 00:22:16.570 } 00:22:16.570 Got JSON-RPC error response 00:22:16.570 response: 00:22:16.570 { 00:22:16.570 "code": -5, 00:22:16.570 "message": "Input/output error" 00:22:16.570 } 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3613834 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3613834 ']' 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3613834 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3613834 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3613834' 00:22:16.570 killing process with pid 3613834 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3613834 00:22:16.570 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.570 00:22:16.570 Latency(us) 00:22:16.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.570 =================================================================================================================== 00:22:16.570 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.570 [2024-07-12 13:30:13.590089] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3613834 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.x4QY0gb5bw 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.x4QY0gb5bw 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.x4QY0gb5bw 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.x4QY0gb5bw' 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3613968 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3613968 /var/tmp/bdevperf.sock 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3613968 ']' 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.570 13:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.570 [2024-07-12 13:30:13.834336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:16.570 [2024-07-12 13:30:13.834417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613968 ] 00:22:16.570 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.570 [2024-07-12 13:30:13.865604] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:16.570 [2024-07-12 13:30:13.893047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.570 [2024-07-12 13:30:13.978237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.829 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.829 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:16.829 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x4QY0gb5bw 00:22:16.829 [2024-07-12 13:30:14.301096] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.829 [2024-07-12 13:30:14.301226] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:17.086 [2024-07-12 13:30:14.309757] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:17.086 [2024-07-12 13:30:14.309802] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:17.086 [2024-07-12 13:30:14.309854] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:17.086 [2024-07-12 13:30:14.310458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95e8d0 (107): Transport endpoint is not connected 00:22:17.086 [2024-07-12 13:30:14.311448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95e8d0 (9): Bad file descriptor 00:22:17.086 [2024-07-12 13:30:14.312446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:17.086 [2024-07-12 13:30:14.312466] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:17.086 [2024-07-12 13:30:14.312483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
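The "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>" errors above show how the target resolves key material: the PSK registered via nvmf_subsystem_add_host is looked up by the hostnqn/subnqn pair carried in the TLS PSK identity, so presenting host2 (previous case) or cnode2 (this case) finds no entry even though the key itself is the valid one. If, hypothetically, host2 were meant to connect, it would need its own registration on the subsystem, along the lines of:

# hypothetical, not part of this test: register a second host with its own PSK
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.x4QY0gb5bw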
00:22:17.086 request: 00:22:17.086 { 00:22:17.086 "name": "TLSTEST", 00:22:17.086 "trtype": "tcp", 00:22:17.086 "traddr": "10.0.0.2", 00:22:17.086 "adrfam": "ipv4", 00:22:17.086 "trsvcid": "4420", 00:22:17.086 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.086 "prchk_reftag": false, 00:22:17.086 "prchk_guard": false, 00:22:17.086 "hdgst": false, 00:22:17.086 "ddgst": false, 00:22:17.086 "psk": "/tmp/tmp.x4QY0gb5bw", 00:22:17.086 "method": "bdev_nvme_attach_controller", 00:22:17.086 "req_id": 1 00:22:17.086 } 00:22:17.086 Got JSON-RPC error response 00:22:17.086 response: 00:22:17.086 { 00:22:17.086 "code": -5, 00:22:17.086 "message": "Input/output error" 00:22:17.086 } 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3613968 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3613968 ']' 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3613968 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3613968 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3613968' 00:22:17.086 killing process with pid 3613968 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3613968 00:22:17.086 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.086 00:22:17.086 Latency(us) 00:22:17.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.086 =================================================================================================================== 00:22:17.086 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:17.086 [2024-07-12 13:30:14.362282] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:17.086 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3613968 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3613990 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3613990 /var/tmp/bdevperf.sock 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3613990 ']' 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.344 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.344 [2024-07-12 13:30:14.632112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:17.344 [2024-07-12 13:30:14.632190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613990 ] 00:22:17.344 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.344 [2024-07-12 13:30:14.663040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:17.344 [2024-07-12 13:30:14.689505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.344 [2024-07-12 13:30:14.771767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.602 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.602 13:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:17.602 13:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:17.861 [2024-07-12 13:30:15.111875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:17.861 [2024-07-12 13:30:15.114199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ddde0 (9): Bad file descriptor 00:22:17.861 [2024-07-12 13:30:15.115195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:17.861 [2024-07-12 13:30:15.115214] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:17.861 [2024-07-12 13:30:15.115245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:17.861 request: 00:22:17.861 { 00:22:17.861 "name": "TLSTEST", 00:22:17.861 "trtype": "tcp", 00:22:17.861 "traddr": "10.0.0.2", 00:22:17.861 "adrfam": "ipv4", 00:22:17.861 "trsvcid": "4420", 00:22:17.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.861 "prchk_reftag": false, 00:22:17.861 "prchk_guard": false, 00:22:17.861 "hdgst": false, 00:22:17.861 "ddgst": false, 00:22:17.861 "method": "bdev_nvme_attach_controller", 00:22:17.861 "req_id": 1 00:22:17.861 } 00:22:17.861 Got JSON-RPC error response 00:22:17.861 response: 00:22:17.861 { 00:22:17.861 "code": -5, 00:22:17.861 "message": "Input/output error" 00:22:17.861 } 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3613990 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3613990 ']' 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3613990 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3613990 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3613990' 00:22:17.861 killing process with pid 3613990 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3613990 00:22:17.861 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.861 00:22:17.861 Latency(us) 00:22:17.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.861 =================================================================================================================== 00:22:17.861 Total : 0.00 0.00 
0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:17.861 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3613990 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3609993 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3609993 ']' 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3609993 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3609993 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3609993' 00:22:18.119 killing process with pid 3609993 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3609993 00:22:18.119 [2024-07-12 13:30:15.412898] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:18.119 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3609993 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.IUaNKgUjwc 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.IUaNKgUjwc 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
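[editor's note] The trace above derives the long TLS key with format_interchange_psk/format_key (an inline python step) and writes it to /tmp/tmp.IUaNKgUjwc with mode 0600. A minimal Python sketch of that formatting step, assuming the NVMe TLS PSK interchange convention of base64-encoding the configured key bytes followed by their little-endian CRC32; under that assumption it should reproduce the NVMeTLSkey-1:02:... value printed above.

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # Sketch of the formatting step only: assumes the interchange payload is
    # base64(key bytes + little-endian CRC32 of those bytes), matching the
    # key_long value printed in the trace above.
    raw = key.encode()                                      # key used as ASCII bytes, as in this test
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")   # 4-byte checksum appended before encoding
    payload = base64.b64encode(raw + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, payload)

if __name__ == "__main__":
    # Same arguments as target/tls.sh@159 above; the "2" selects the hash indicator in the prefix.
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))

The chmod 0600 on the temporary key file matters: later steps in this log deliberately loosen that mode to show the key being rejected.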
00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3614141 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:18.376 13:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3614141 00:22:18.377 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3614141 ']' 00:22:18.377 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.377 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.377 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.377 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.377 13:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.377 [2024-07-12 13:30:15.763887] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:18.377 [2024-07-12 13:30:15.763957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.377 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.377 [2024-07-12 13:30:15.798593] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:18.377 [2024-07-12 13:30:15.826468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.634 [2024-07-12 13:30:15.913711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.634 [2024-07-12 13:30:15.913762] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.634 [2024-07-12 13:30:15.913792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.634 [2024-07-12 13:30:15.913804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.634 [2024-07-12 13:30:15.913815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
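[editor's note] The request/response blocks the harness prints (the bdev_nvme_attach_controller failure above, and the similar exchanges later in this section) are ordinary SPDK JSON-RPC calls over a Unix domain socket. A minimal sketch of issuing such a call directly against the bdevperf RPC socket, assuming plain JSON-RPC 2.0 framing with the reply read until it parses as one JSON object; the parameters are abridged from the request shown above.

import json
import socket

def spdk_rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    # Minimal JSON-RPC 2.0 client: SPDK answers with a single JSON object, so
    # keep reading until the buffer parses. scripts/rpc.py does this more robustly.
    request = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue  # reply not complete yet, keep reading

if __name__ == "__main__":
    # Abridged version of the failing attach above: no "psk" parameter is passed,
    # so the TLS-only listener refuses the connection and an error object comes back.
    reply = spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
    })
    print(reply.get("error") or reply.get("result"))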
00:22:18.634 [2024-07-12 13:30:15.913841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.IUaNKgUjwc 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IUaNKgUjwc 00:22:18.634 13:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.891 [2024-07-12 13:30:16.308673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.891 13:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:19.148 13:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.405 [2024-07-12 13:30:16.838009] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.405 [2024-07-12 13:30:16.838215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.405 13:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.970 malloc0 00:22:19.970 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.970 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IUaNKgUjwc 00:22:20.227 [2024-07-12 13:30:17.635855] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IUaNKgUjwc 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IUaNKgUjwc' 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3614424 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3614424 /var/tmp/bdevperf.sock 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3614424 ']' 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.227 13:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.227 [2024-07-12 13:30:17.695371] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:20.227 [2024-07-12 13:30:17.695465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614424 ] 00:22:20.485 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.485 [2024-07-12 13:30:17.728858] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:20.485 [2024-07-12 13:30:17.756063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.485 [2024-07-12 13:30:17.841503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.485 13:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.485 13:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:20.485 13:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IUaNKgUjwc 00:22:20.742 [2024-07-12 13:30:18.191708] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.742 [2024-07-12 13:30:18.191816] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:20.999 TLSTESTn1 00:22:20.999 13:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:20.999 Running I/O for 10 seconds... 
00:22:33.187 00:22:33.187 Latency(us) 00:22:33.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.187 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:33.187 Verification LBA range: start 0x0 length 0x2000 00:22:33.187 TLSTESTn1 : 10.04 2848.09 11.13 0.00 0.00 44833.38 7330.32 64856.37 00:22:33.187 =================================================================================================================== 00:22:33.187 Total : 2848.09 11.13 0.00 0.00 44833.38 7330.32 64856.37 00:22:33.187 0 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3614424 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3614424 ']' 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3614424 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3614424 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3614424' 00:22:33.187 killing process with pid 3614424 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3614424 00:22:33.187 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.187 00:22:33.187 Latency(us) 00:22:33.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.187 =================================================================================================================== 00:22:33.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.187 [2024-07-12 13:30:28.487428] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3614424 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.IUaNKgUjwc 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IUaNKgUjwc 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IUaNKgUjwc 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IUaNKgUjwc 00:22:33.187 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn 
psk 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IUaNKgUjwc' 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3615737 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3615737 /var/tmp/bdevperf.sock 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3615737 ']' 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.188 13:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.188 [2024-07-12 13:30:28.765809] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:33.188 [2024-07-12 13:30:28.765889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615737 ] 00:22:33.188 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.188 [2024-07-12 13:30:28.797034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:33.188 [2024-07-12 13:30:28.823531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.188 [2024-07-12 13:30:28.904335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IUaNKgUjwc 00:22:33.188 [2024-07-12 13:30:29.232963] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.188 [2024-07-12 13:30:29.233053] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:33.188 [2024-07-12 13:30:29.233069] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.IUaNKgUjwc 00:22:33.188 request: 00:22:33.188 { 00:22:33.188 "name": "TLSTEST", 00:22:33.188 "trtype": "tcp", 00:22:33.188 "traddr": "10.0.0.2", 00:22:33.188 "adrfam": "ipv4", 00:22:33.188 "trsvcid": "4420", 00:22:33.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.188 "prchk_reftag": false, 00:22:33.188 "prchk_guard": false, 00:22:33.188 "hdgst": false, 00:22:33.188 "ddgst": false, 00:22:33.188 "psk": "/tmp/tmp.IUaNKgUjwc", 00:22:33.188 "method": "bdev_nvme_attach_controller", 00:22:33.188 "req_id": 1 00:22:33.188 } 00:22:33.188 Got JSON-RPC error response 00:22:33.188 response: 00:22:33.188 { 00:22:33.188 "code": -1, 00:22:33.188 "message": "Operation not permitted" 00:22:33.188 } 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3615737 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3615737 ']' 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3615737 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3615737 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3615737' 00:22:33.188 killing process with pid 3615737 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3615737 00:22:33.188 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.188 00:22:33.188 Latency(us) 00:22:33.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.188 =================================================================================================================== 00:22:33.188 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3615737 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:33.188 13:30:29 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3614141 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3614141 ']' 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3614141 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3614141 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3614141' 00:22:33.188 killing process with pid 3614141 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3614141 00:22:33.188 [2024-07-12 13:30:29.533554] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3614141 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3615881 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3615881 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3615881 ']' 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.188 13:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.188 [2024-07-12 13:30:29.816010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
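[editor's note] The chmod 0666 at target/tls.sh@170 above makes the initiator reject the key ("Incorrect permissions for PSK file", JSON-RPC error -1), and the target started next refuses nvmf_subsystem_add_host for the same reason. A small sketch of the kind of pre-flight check implied by those errors, assuming the rule is simply that the PSK file must not be accessible to group/other; the exact mask SPDK enforces may differ.

import os
import stat

def check_psk_permissions(path: str) -> None:
    # Reject a PSK file that is readable or writable by group/other,
    # mirroring the "Incorrect permissions for PSK file" failures in this log.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "{}: mode {:o} is too permissive for a PSK file; use 0600".format(path, mode))

if __name__ == "__main__":
    check_psk_permissions("/tmp/tmp.IUaNKgUjwc")  # key file path taken from this log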
00:22:33.188 [2024-07-12 13:30:29.816101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.188 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.188 [2024-07-12 13:30:29.854277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:33.188 [2024-07-12 13:30:29.879950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.188 [2024-07-12 13:30:29.963071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.189 [2024-07-12 13:30:29.963136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.189 [2024-07-12 13:30:29.963149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.189 [2024-07-12 13:30:29.963160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.189 [2024-07-12 13:30:29.963190] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.189 [2024-07-12 13:30:29.963216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.IUaNKgUjwc 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IUaNKgUjwc 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.IUaNKgUjwc 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IUaNKgUjwc 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:33.189 [2024-07-12 13:30:30.314162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:33.189 13:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:33.446 [2024-07-12 13:30:30.807496] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:33.446 [2024-07-12 13:30:30.807729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.446 13:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:33.703 malloc0 00:22:33.703 13:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:33.960 13:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IUaNKgUjwc 00:22:34.217 [2024-07-12 13:30:31.531575] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:34.217 [2024-07-12 13:30:31.531611] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:34.217 [2024-07-12 13:30:31.531657] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:34.217 request: 00:22:34.217 { 00:22:34.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.217 "host": "nqn.2016-06.io.spdk:host1", 00:22:34.217 "psk": "/tmp/tmp.IUaNKgUjwc", 00:22:34.217 "method": "nvmf_subsystem_add_host", 00:22:34.217 "req_id": 1 00:22:34.217 } 00:22:34.217 Got JSON-RPC error response 00:22:34.217 response: 00:22:34.217 { 00:22:34.217 "code": -32603, 00:22:34.217 "message": "Internal error" 00:22:34.217 } 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3615881 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3615881 ']' 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3615881 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3615881 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3615881' 00:22:34.217 killing process with pid 3615881 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3615881 00:22:34.217 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3615881 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.IUaNKgUjwc 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3616067 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3616067 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3616067 ']' 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.475 13:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.475 [2024-07-12 13:30:31.883005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:34.475 [2024-07-12 13:30:31.883088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.475 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.475 [2024-07-12 13:30:31.921836] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:34.732 [2024-07-12 13:30:31.950439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.732 [2024-07-12 13:30:32.033960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.732 [2024-07-12 13:30:32.034011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.732 [2024-07-12 13:30:32.034038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.732 [2024-07-12 13:30:32.034050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.732 [2024-07-12 13:30:32.034060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
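[editor's note] The setup_nvmf_tgt sequence that runs next (and that already ran at target/tls.sh@165 and @177 above) boils down to six rpc.py calls against the freshly started target. For reference, a sketch collecting them into one script; the rpc.py path, NQNs, address, and key file are copied from this log and would differ on another setup.

import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"  # rpc.py path from this log
NQN = "nqn.2016-06.io.spdk:cnode1"

# The same six calls setup_nvmf_tgt (target/tls.sh@49-58) drives in the trace:
# TCP transport, subsystem, TLS listener (-k), malloc bdev, namespace, allowed host + PSK.
CALLS = [
    ["nvmf_create_transport", "-t", "tcp", "-o"],
    ["nvmf_create_subsystem", NQN, "-s", "SPDK00000000000001", "-m", "10"],
    ["nvmf_subsystem_add_listener", NQN, "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k"],
    ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
    ["nvmf_subsystem_add_ns", NQN, "malloc0", "-n", "1"],
    ["nvmf_subsystem_add_host", NQN, "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/tmp.IUaNKgUjwc"],
]

for call in CALLS:
    subprocess.run([RPC, *call], check=True)  # stop at the first failing step, as the test wrappers do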
00:22:34.732 [2024-07-12 13:30:32.034085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.IUaNKgUjwc 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IUaNKgUjwc 00:22:34.732 13:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:34.989 [2024-07-12 13:30:32.432902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.989 13:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:35.245 13:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:35.502 [2024-07-12 13:30:32.910132] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.502 [2024-07-12 13:30:32.910373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.502 13:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:35.760 malloc0 00:22:35.760 13:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IUaNKgUjwc 00:22:36.324 [2024-07-12 13:30:33.726938] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3616342 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3616342 /var/tmp/bdevperf.sock 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3616342 ']' 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.324 13:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.324 [2024-07-12 13:30:33.784193] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:36.324 [2024-07-12 13:30:33.784262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616342 ] 00:22:36.582 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.582 [2024-07-12 13:30:33.815866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:36.582 [2024-07-12 13:30:33.843222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.582 [2024-07-12 13:30:33.928124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.582 13:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.582 13:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:36.582 13:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IUaNKgUjwc 00:22:36.840 [2024-07-12 13:30:34.264265] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.840 [2024-07-12 13:30:34.264429] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.126 TLSTESTn1 00:22:37.126 13:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:37.383 13:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:37.383 "subsystems": [ 00:22:37.383 { 00:22:37.383 "subsystem": "keyring", 00:22:37.383 "config": [] 00:22:37.383 }, 00:22:37.383 { 00:22:37.383 "subsystem": "iobuf", 00:22:37.383 "config": [ 00:22:37.383 { 00:22:37.383 "method": "iobuf_set_options", 00:22:37.383 "params": { 00:22:37.383 "small_pool_count": 8192, 00:22:37.383 "large_pool_count": 1024, 00:22:37.383 "small_bufsize": 8192, 00:22:37.383 "large_bufsize": 135168 00:22:37.383 } 00:22:37.383 } 00:22:37.383 ] 00:22:37.383 }, 00:22:37.383 { 00:22:37.383 "subsystem": "sock", 00:22:37.383 "config": [ 00:22:37.383 { 00:22:37.383 "method": "sock_set_default_impl", 00:22:37.383 "params": { 00:22:37.383 "impl_name": "posix" 00:22:37.383 } 00:22:37.383 }, 00:22:37.383 { 00:22:37.383 "method": "sock_impl_set_options", 00:22:37.383 "params": { 00:22:37.383 "impl_name": "ssl", 00:22:37.383 "recv_buf_size": 4096, 00:22:37.383 "send_buf_size": 4096, 00:22:37.383 "enable_recv_pipe": true, 00:22:37.383 "enable_quickack": false, 00:22:37.383 "enable_placement_id": 0, 00:22:37.383 "enable_zerocopy_send_server": true, 00:22:37.383 "enable_zerocopy_send_client": false, 00:22:37.383 "zerocopy_threshold": 0, 00:22:37.383 "tls_version": 0, 00:22:37.383 "enable_ktls": false 00:22:37.383 
} 00:22:37.383 }, 00:22:37.383 { 00:22:37.383 "method": "sock_impl_set_options", 00:22:37.383 "params": { 00:22:37.383 "impl_name": "posix", 00:22:37.383 "recv_buf_size": 2097152, 00:22:37.383 "send_buf_size": 2097152, 00:22:37.383 "enable_recv_pipe": true, 00:22:37.383 "enable_quickack": false, 00:22:37.383 "enable_placement_id": 0, 00:22:37.383 "enable_zerocopy_send_server": true, 00:22:37.383 "enable_zerocopy_send_client": false, 00:22:37.383 "zerocopy_threshold": 0, 00:22:37.383 "tls_version": 0, 00:22:37.383 "enable_ktls": false 00:22:37.383 } 00:22:37.383 } 00:22:37.383 ] 00:22:37.383 }, 00:22:37.383 { 00:22:37.383 "subsystem": "vmd", 00:22:37.383 "config": [] 00:22:37.383 }, 00:22:37.383 { 00:22:37.383 "subsystem": "accel", 00:22:37.383 "config": [ 00:22:37.383 { 00:22:37.383 "method": "accel_set_options", 00:22:37.383 "params": { 00:22:37.383 "small_cache_size": 128, 00:22:37.383 "large_cache_size": 16, 00:22:37.383 "task_count": 2048, 00:22:37.383 "sequence_count": 2048, 00:22:37.383 "buf_count": 2048 00:22:37.383 } 00:22:37.383 } 00:22:37.383 ] 00:22:37.383 }, 00:22:37.383 { 00:22:37.383 "subsystem": "bdev", 00:22:37.383 "config": [ 00:22:37.383 { 00:22:37.383 "method": "bdev_set_options", 00:22:37.383 "params": { 00:22:37.384 "bdev_io_pool_size": 65535, 00:22:37.384 "bdev_io_cache_size": 256, 00:22:37.384 "bdev_auto_examine": true, 00:22:37.384 "iobuf_small_cache_size": 128, 00:22:37.384 "iobuf_large_cache_size": 16 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "bdev_raid_set_options", 00:22:37.384 "params": { 00:22:37.384 "process_window_size_kb": 1024 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "bdev_iscsi_set_options", 00:22:37.384 "params": { 00:22:37.384 "timeout_sec": 30 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "bdev_nvme_set_options", 00:22:37.384 "params": { 00:22:37.384 "action_on_timeout": "none", 00:22:37.384 "timeout_us": 0, 00:22:37.384 "timeout_admin_us": 0, 00:22:37.384 "keep_alive_timeout_ms": 10000, 00:22:37.384 "arbitration_burst": 0, 00:22:37.384 "low_priority_weight": 0, 00:22:37.384 "medium_priority_weight": 0, 00:22:37.384 "high_priority_weight": 0, 00:22:37.384 "nvme_adminq_poll_period_us": 10000, 00:22:37.384 "nvme_ioq_poll_period_us": 0, 00:22:37.384 "io_queue_requests": 0, 00:22:37.384 "delay_cmd_submit": true, 00:22:37.384 "transport_retry_count": 4, 00:22:37.384 "bdev_retry_count": 3, 00:22:37.384 "transport_ack_timeout": 0, 00:22:37.384 "ctrlr_loss_timeout_sec": 0, 00:22:37.384 "reconnect_delay_sec": 0, 00:22:37.384 "fast_io_fail_timeout_sec": 0, 00:22:37.384 "disable_auto_failback": false, 00:22:37.384 "generate_uuids": false, 00:22:37.384 "transport_tos": 0, 00:22:37.384 "nvme_error_stat": false, 00:22:37.384 "rdma_srq_size": 0, 00:22:37.384 "io_path_stat": false, 00:22:37.384 "allow_accel_sequence": false, 00:22:37.384 "rdma_max_cq_size": 0, 00:22:37.384 "rdma_cm_event_timeout_ms": 0, 00:22:37.384 "dhchap_digests": [ 00:22:37.384 "sha256", 00:22:37.384 "sha384", 00:22:37.384 "sha512" 00:22:37.384 ], 00:22:37.384 "dhchap_dhgroups": [ 00:22:37.384 "null", 00:22:37.384 "ffdhe2048", 00:22:37.384 "ffdhe3072", 00:22:37.384 "ffdhe4096", 00:22:37.384 "ffdhe6144", 00:22:37.384 "ffdhe8192" 00:22:37.384 ] 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "bdev_nvme_set_hotplug", 00:22:37.384 "params": { 00:22:37.384 "period_us": 100000, 00:22:37.384 "enable": false 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "bdev_malloc_create", 
00:22:37.384 "params": { 00:22:37.384 "name": "malloc0", 00:22:37.384 "num_blocks": 8192, 00:22:37.384 "block_size": 4096, 00:22:37.384 "physical_block_size": 4096, 00:22:37.384 "uuid": "14263955-78b0-4539-950a-cffa95c052d3", 00:22:37.384 "optimal_io_boundary": 0 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "bdev_wait_for_examine" 00:22:37.384 } 00:22:37.384 ] 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "subsystem": "nbd", 00:22:37.384 "config": [] 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "subsystem": "scheduler", 00:22:37.384 "config": [ 00:22:37.384 { 00:22:37.384 "method": "framework_set_scheduler", 00:22:37.384 "params": { 00:22:37.384 "name": "static" 00:22:37.384 } 00:22:37.384 } 00:22:37.384 ] 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "subsystem": "nvmf", 00:22:37.384 "config": [ 00:22:37.384 { 00:22:37.384 "method": "nvmf_set_config", 00:22:37.384 "params": { 00:22:37.384 "discovery_filter": "match_any", 00:22:37.384 "admin_cmd_passthru": { 00:22:37.384 "identify_ctrlr": false 00:22:37.384 } 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "nvmf_set_max_subsystems", 00:22:37.384 "params": { 00:22:37.384 "max_subsystems": 1024 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "nvmf_set_crdt", 00:22:37.384 "params": { 00:22:37.384 "crdt1": 0, 00:22:37.384 "crdt2": 0, 00:22:37.384 "crdt3": 0 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "nvmf_create_transport", 00:22:37.384 "params": { 00:22:37.384 "trtype": "TCP", 00:22:37.384 "max_queue_depth": 128, 00:22:37.384 "max_io_qpairs_per_ctrlr": 127, 00:22:37.384 "in_capsule_data_size": 4096, 00:22:37.384 "max_io_size": 131072, 00:22:37.384 "io_unit_size": 131072, 00:22:37.384 "max_aq_depth": 128, 00:22:37.384 "num_shared_buffers": 511, 00:22:37.384 "buf_cache_size": 4294967295, 00:22:37.384 "dif_insert_or_strip": false, 00:22:37.384 "zcopy": false, 00:22:37.384 "c2h_success": false, 00:22:37.384 "sock_priority": 0, 00:22:37.384 "abort_timeout_sec": 1, 00:22:37.384 "ack_timeout": 0, 00:22:37.384 "data_wr_pool_size": 0 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "nvmf_create_subsystem", 00:22:37.384 "params": { 00:22:37.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.384 "allow_any_host": false, 00:22:37.384 "serial_number": "SPDK00000000000001", 00:22:37.384 "model_number": "SPDK bdev Controller", 00:22:37.384 "max_namespaces": 10, 00:22:37.384 "min_cntlid": 1, 00:22:37.384 "max_cntlid": 65519, 00:22:37.384 "ana_reporting": false 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "nvmf_subsystem_add_host", 00:22:37.384 "params": { 00:22:37.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.384 "host": "nqn.2016-06.io.spdk:host1", 00:22:37.384 "psk": "/tmp/tmp.IUaNKgUjwc" 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "nvmf_subsystem_add_ns", 00:22:37.384 "params": { 00:22:37.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.384 "namespace": { 00:22:37.384 "nsid": 1, 00:22:37.384 "bdev_name": "malloc0", 00:22:37.384 "nguid": "1426395578B04539950ACFFA95C052D3", 00:22:37.384 "uuid": "14263955-78b0-4539-950a-cffa95c052d3", 00:22:37.384 "no_auto_visible": false 00:22:37.384 } 00:22:37.384 } 00:22:37.384 }, 00:22:37.384 { 00:22:37.384 "method": "nvmf_subsystem_add_listener", 00:22:37.384 "params": { 00:22:37.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.384 "listen_address": { 00:22:37.384 "trtype": "TCP", 00:22:37.384 "adrfam": "IPv4", 00:22:37.384 "traddr": "10.0.0.2", 00:22:37.384 
"trsvcid": "4420" 00:22:37.384 }, 00:22:37.384 "secure_channel": true 00:22:37.384 } 00:22:37.384 } 00:22:37.384 ] 00:22:37.384 } 00:22:37.384 ] 00:22:37.384 }' 00:22:37.384 13:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:37.642 13:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:37.642 "subsystems": [ 00:22:37.642 { 00:22:37.642 "subsystem": "keyring", 00:22:37.642 "config": [] 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "subsystem": "iobuf", 00:22:37.642 "config": [ 00:22:37.642 { 00:22:37.642 "method": "iobuf_set_options", 00:22:37.642 "params": { 00:22:37.642 "small_pool_count": 8192, 00:22:37.642 "large_pool_count": 1024, 00:22:37.642 "small_bufsize": 8192, 00:22:37.642 "large_bufsize": 135168 00:22:37.642 } 00:22:37.642 } 00:22:37.642 ] 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "subsystem": "sock", 00:22:37.642 "config": [ 00:22:37.642 { 00:22:37.642 "method": "sock_set_default_impl", 00:22:37.642 "params": { 00:22:37.642 "impl_name": "posix" 00:22:37.642 } 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "method": "sock_impl_set_options", 00:22:37.642 "params": { 00:22:37.642 "impl_name": "ssl", 00:22:37.642 "recv_buf_size": 4096, 00:22:37.642 "send_buf_size": 4096, 00:22:37.642 "enable_recv_pipe": true, 00:22:37.642 "enable_quickack": false, 00:22:37.642 "enable_placement_id": 0, 00:22:37.642 "enable_zerocopy_send_server": true, 00:22:37.642 "enable_zerocopy_send_client": false, 00:22:37.642 "zerocopy_threshold": 0, 00:22:37.642 "tls_version": 0, 00:22:37.642 "enable_ktls": false 00:22:37.642 } 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "method": "sock_impl_set_options", 00:22:37.642 "params": { 00:22:37.642 "impl_name": "posix", 00:22:37.642 "recv_buf_size": 2097152, 00:22:37.642 "send_buf_size": 2097152, 00:22:37.642 "enable_recv_pipe": true, 00:22:37.642 "enable_quickack": false, 00:22:37.642 "enable_placement_id": 0, 00:22:37.642 "enable_zerocopy_send_server": true, 00:22:37.642 "enable_zerocopy_send_client": false, 00:22:37.642 "zerocopy_threshold": 0, 00:22:37.642 "tls_version": 0, 00:22:37.642 "enable_ktls": false 00:22:37.642 } 00:22:37.642 } 00:22:37.642 ] 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "subsystem": "vmd", 00:22:37.642 "config": [] 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "subsystem": "accel", 00:22:37.642 "config": [ 00:22:37.642 { 00:22:37.642 "method": "accel_set_options", 00:22:37.642 "params": { 00:22:37.642 "small_cache_size": 128, 00:22:37.642 "large_cache_size": 16, 00:22:37.642 "task_count": 2048, 00:22:37.642 "sequence_count": 2048, 00:22:37.642 "buf_count": 2048 00:22:37.642 } 00:22:37.642 } 00:22:37.642 ] 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "subsystem": "bdev", 00:22:37.642 "config": [ 00:22:37.642 { 00:22:37.642 "method": "bdev_set_options", 00:22:37.642 "params": { 00:22:37.642 "bdev_io_pool_size": 65535, 00:22:37.642 "bdev_io_cache_size": 256, 00:22:37.642 "bdev_auto_examine": true, 00:22:37.642 "iobuf_small_cache_size": 128, 00:22:37.642 "iobuf_large_cache_size": 16 00:22:37.642 } 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "method": "bdev_raid_set_options", 00:22:37.642 "params": { 00:22:37.642 "process_window_size_kb": 1024 00:22:37.642 } 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "method": "bdev_iscsi_set_options", 00:22:37.642 "params": { 00:22:37.642 "timeout_sec": 30 00:22:37.642 } 00:22:37.642 }, 00:22:37.642 { 00:22:37.642 "method": "bdev_nvme_set_options", 00:22:37.642 "params": { 
00:22:37.642 "action_on_timeout": "none", 00:22:37.642 "timeout_us": 0, 00:22:37.642 "timeout_admin_us": 0, 00:22:37.642 "keep_alive_timeout_ms": 10000, 00:22:37.642 "arbitration_burst": 0, 00:22:37.642 "low_priority_weight": 0, 00:22:37.642 "medium_priority_weight": 0, 00:22:37.642 "high_priority_weight": 0, 00:22:37.642 "nvme_adminq_poll_period_us": 10000, 00:22:37.642 "nvme_ioq_poll_period_us": 0, 00:22:37.642 "io_queue_requests": 512, 00:22:37.642 "delay_cmd_submit": true, 00:22:37.642 "transport_retry_count": 4, 00:22:37.642 "bdev_retry_count": 3, 00:22:37.642 "transport_ack_timeout": 0, 00:22:37.642 "ctrlr_loss_timeout_sec": 0, 00:22:37.642 "reconnect_delay_sec": 0, 00:22:37.642 "fast_io_fail_timeout_sec": 0, 00:22:37.642 "disable_auto_failback": false, 00:22:37.642 "generate_uuids": false, 00:22:37.642 "transport_tos": 0, 00:22:37.642 "nvme_error_stat": false, 00:22:37.642 "rdma_srq_size": 0, 00:22:37.642 "io_path_stat": false, 00:22:37.642 "allow_accel_sequence": false, 00:22:37.642 "rdma_max_cq_size": 0, 00:22:37.642 "rdma_cm_event_timeout_ms": 0, 00:22:37.643 "dhchap_digests": [ 00:22:37.643 "sha256", 00:22:37.643 "sha384", 00:22:37.643 "sha512" 00:22:37.643 ], 00:22:37.643 "dhchap_dhgroups": [ 00:22:37.643 "null", 00:22:37.643 "ffdhe2048", 00:22:37.643 "ffdhe3072", 00:22:37.643 "ffdhe4096", 00:22:37.643 "ffdhe6144", 00:22:37.643 "ffdhe8192" 00:22:37.643 ] 00:22:37.643 } 00:22:37.643 }, 00:22:37.643 { 00:22:37.643 "method": "bdev_nvme_attach_controller", 00:22:37.643 "params": { 00:22:37.643 "name": "TLSTEST", 00:22:37.643 "trtype": "TCP", 00:22:37.643 "adrfam": "IPv4", 00:22:37.643 "traddr": "10.0.0.2", 00:22:37.643 "trsvcid": "4420", 00:22:37.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.643 "prchk_reftag": false, 00:22:37.643 "prchk_guard": false, 00:22:37.643 "ctrlr_loss_timeout_sec": 0, 00:22:37.643 "reconnect_delay_sec": 0, 00:22:37.643 "fast_io_fail_timeout_sec": 0, 00:22:37.643 "psk": "/tmp/tmp.IUaNKgUjwc", 00:22:37.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.643 "hdgst": false, 00:22:37.643 "ddgst": false 00:22:37.643 } 00:22:37.643 }, 00:22:37.643 { 00:22:37.643 "method": "bdev_nvme_set_hotplug", 00:22:37.643 "params": { 00:22:37.643 "period_us": 100000, 00:22:37.643 "enable": false 00:22:37.643 } 00:22:37.643 }, 00:22:37.643 { 00:22:37.643 "method": "bdev_wait_for_examine" 00:22:37.643 } 00:22:37.643 ] 00:22:37.643 }, 00:22:37.643 { 00:22:37.643 "subsystem": "nbd", 00:22:37.643 "config": [] 00:22:37.643 } 00:22:37.643 ] 00:22:37.643 }' 00:22:37.643 13:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3616342 00:22:37.643 13:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3616342 ']' 00:22:37.643 13:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3616342 00:22:37.643 13:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.643 13:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.643 13:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3616342 00:22:37.643 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:37.643 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:37.643 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3616342' 00:22:37.643 killing process with pid 3616342 00:22:37.643 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3616342 
00:22:37.643 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.643 00:22:37.643 Latency(us) 00:22:37.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.643 =================================================================================================================== 00:22:37.643 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.643 [2024-07-12 13:30:35.007436] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:37.643 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3616342 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3616067 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3616067 ']' 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3616067 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3616067 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3616067' 00:22:37.900 killing process with pid 3616067 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3616067 00:22:37.900 [2024-07-12 13:30:35.260693] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:37.900 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3616067 00:22:38.158 13:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:38.158 13:30:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.158 13:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:38.158 "subsystems": [ 00:22:38.158 { 00:22:38.158 "subsystem": "keyring", 00:22:38.158 "config": [] 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "subsystem": "iobuf", 00:22:38.158 "config": [ 00:22:38.158 { 00:22:38.158 "method": "iobuf_set_options", 00:22:38.158 "params": { 00:22:38.158 "small_pool_count": 8192, 00:22:38.158 "large_pool_count": 1024, 00:22:38.158 "small_bufsize": 8192, 00:22:38.158 "large_bufsize": 135168 00:22:38.158 } 00:22:38.158 } 00:22:38.158 ] 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "subsystem": "sock", 00:22:38.158 "config": [ 00:22:38.158 { 00:22:38.158 "method": "sock_set_default_impl", 00:22:38.158 "params": { 00:22:38.158 "impl_name": "posix" 00:22:38.158 } 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "method": "sock_impl_set_options", 00:22:38.158 "params": { 00:22:38.158 "impl_name": "ssl", 00:22:38.158 "recv_buf_size": 4096, 00:22:38.158 "send_buf_size": 4096, 00:22:38.158 "enable_recv_pipe": true, 00:22:38.158 "enable_quickack": false, 00:22:38.158 "enable_placement_id": 0, 00:22:38.158 "enable_zerocopy_send_server": true, 00:22:38.158 "enable_zerocopy_send_client": false, 00:22:38.158 "zerocopy_threshold": 0, 00:22:38.158 "tls_version": 0, 00:22:38.158 "enable_ktls": false 00:22:38.158 } 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "method": 
"sock_impl_set_options", 00:22:38.158 "params": { 00:22:38.158 "impl_name": "posix", 00:22:38.158 "recv_buf_size": 2097152, 00:22:38.158 "send_buf_size": 2097152, 00:22:38.158 "enable_recv_pipe": true, 00:22:38.158 "enable_quickack": false, 00:22:38.158 "enable_placement_id": 0, 00:22:38.158 "enable_zerocopy_send_server": true, 00:22:38.158 "enable_zerocopy_send_client": false, 00:22:38.158 "zerocopy_threshold": 0, 00:22:38.158 "tls_version": 0, 00:22:38.158 "enable_ktls": false 00:22:38.158 } 00:22:38.158 } 00:22:38.158 ] 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "subsystem": "vmd", 00:22:38.158 "config": [] 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "subsystem": "accel", 00:22:38.158 "config": [ 00:22:38.158 { 00:22:38.158 "method": "accel_set_options", 00:22:38.158 "params": { 00:22:38.158 "small_cache_size": 128, 00:22:38.158 "large_cache_size": 16, 00:22:38.158 "task_count": 2048, 00:22:38.158 "sequence_count": 2048, 00:22:38.158 "buf_count": 2048 00:22:38.158 } 00:22:38.158 } 00:22:38.158 ] 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "subsystem": "bdev", 00:22:38.158 "config": [ 00:22:38.158 { 00:22:38.158 "method": "bdev_set_options", 00:22:38.158 "params": { 00:22:38.158 "bdev_io_pool_size": 65535, 00:22:38.158 "bdev_io_cache_size": 256, 00:22:38.158 "bdev_auto_examine": true, 00:22:38.158 "iobuf_small_cache_size": 128, 00:22:38.158 "iobuf_large_cache_size": 16 00:22:38.158 } 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "method": "bdev_raid_set_options", 00:22:38.158 "params": { 00:22:38.158 "process_window_size_kb": 1024 00:22:38.158 } 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "method": "bdev_iscsi_set_options", 00:22:38.158 "params": { 00:22:38.158 "timeout_sec": 30 00:22:38.158 } 00:22:38.158 }, 00:22:38.158 { 00:22:38.158 "method": "bdev_nvme_set_options", 00:22:38.158 "params": { 00:22:38.158 "action_on_timeout": "none", 00:22:38.158 "timeout_us": 0, 00:22:38.158 "timeout_admin_us": 0, 00:22:38.158 "keep_alive_timeout_ms": 10000, 00:22:38.158 "arbitration_burst": 0, 00:22:38.158 "low_priority_weight": 0, 00:22:38.158 "medium_priority_weight": 0, 00:22:38.158 "high_priority_weight": 0, 00:22:38.158 "nvme_adminq_poll_period_us": 10000, 00:22:38.158 "nvme_ioq_poll_period_us": 0, 00:22:38.158 "io_queue_requests": 0, 00:22:38.158 "delay_cmd_submit": true, 00:22:38.158 "transport_retry_count": 4, 00:22:38.158 "bdev_retry_count": 3, 00:22:38.158 "transport_ack_timeout": 0, 00:22:38.158 "ctrlr_loss_timeout_sec": 0, 00:22:38.158 "reconnect_delay_sec": 0, 00:22:38.158 "fast_io_fail_timeout_sec": 0, 00:22:38.158 "disable_auto_failback": false, 00:22:38.158 "generate_uuids": false, 00:22:38.158 "transport_tos": 0, 00:22:38.158 "nvme_error_stat": false, 00:22:38.158 "rdma_srq_size": 0, 00:22:38.158 "io_path_stat": false, 00:22:38.158 "allow_accel_sequence": false, 00:22:38.158 "rdma_max_cq_size": 0, 00:22:38.158 "rdma_cm_event_timeout_ms": 0, 00:22:38.158 "dhchap_digests": [ 00:22:38.158 "sha256", 00:22:38.159 "sha384", 00:22:38.159 "sha512" 00:22:38.159 ], 00:22:38.159 "dhchap_dhgroups": [ 00:22:38.159 "null", 00:22:38.159 "ffdhe2048", 00:22:38.159 "ffdhe3072", 00:22:38.159 "ffdhe4096", 00:22:38.159 "ffdhe6144", 00:22:38.159 "ffdhe8192" 00:22:38.159 ] 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "bdev_nvme_set_hotplug", 00:22:38.159 "params": { 00:22:38.159 "period_us": 100000, 00:22:38.159 "enable": false 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "bdev_malloc_create", 00:22:38.159 "params": { 00:22:38.159 "name": "malloc0", 
00:22:38.159 "num_blocks": 8192, 00:22:38.159 "block_size": 4096, 00:22:38.159 "physical_block_size": 4096, 00:22:38.159 "uuid": "14263955-78b0-4539-950a-cffa95c052d3", 00:22:38.159 "optimal_io_boundary": 0 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "bdev_wait_for_examine" 00:22:38.159 } 00:22:38.159 ] 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "subsystem": "nbd", 00:22:38.159 "config": [] 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "subsystem": "scheduler", 00:22:38.159 "config": [ 00:22:38.159 { 00:22:38.159 "method": "framework_set_scheduler", 00:22:38.159 "params": { 00:22:38.159 "name": "static" 00:22:38.159 } 00:22:38.159 } 00:22:38.159 ] 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "subsystem": "nvmf", 00:22:38.159 "config": [ 00:22:38.159 { 00:22:38.159 "method": "nvmf_set_config", 00:22:38.159 "params": { 00:22:38.159 "discovery_filter": "match_any", 00:22:38.159 "admin_cmd_passthru": { 00:22:38.159 "identify_ctrlr": false 00:22:38.159 } 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "nvmf_set_max_subsystems", 00:22:38.159 "params": { 00:22:38.159 "max_subsystems": 1024 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "nvmf_set_crdt", 00:22:38.159 "params": { 00:22:38.159 "crdt1": 0, 00:22:38.159 "crdt2": 0, 00:22:38.159 "crdt3": 0 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "nvmf_create_transport", 00:22:38.159 "params": { 00:22:38.159 "trtype": "TCP", 00:22:38.159 "max_queue_depth": 128, 00:22:38.159 "max_io_qpairs_per_ctrlr": 127, 00:22:38.159 "in_capsule_data_size": 4096, 00:22:38.159 "max_io_size": 131072, 00:22:38.159 "io_unit_size": 131072, 00:22:38.159 "max_aq_depth": 128, 00:22:38.159 "num_shared_buffers": 511, 00:22:38.159 "buf_cache_size": 4294967295, 00:22:38.159 "dif_insert_or_strip": false, 00:22:38.159 "zcopy": false, 00:22:38.159 "c2h_success": false, 00:22:38.159 "sock_priority": 0, 00:22:38.159 "abort_timeout_sec": 1, 00:22:38.159 "ack_timeout": 0, 00:22:38.159 "data_wr_pool_size": 0 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "nvmf_create_subsystem", 00:22:38.159 "params": { 00:22:38.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.159 "allow_any_host": false, 00:22:38.159 "serial_number": "SPDK00000000000001", 00:22:38.159 "model_number": "SPDK bdev Controller", 00:22:38.159 "max_namespaces": 10, 00:22:38.159 "min_cntlid": 1, 00:22:38.159 "max_cntlid": 65519, 00:22:38.159 "ana_reporting": false 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "nvmf_subsystem_add_host", 00:22:38.159 "params": { 00:22:38.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.159 "host": "nqn.2016-06.io.spdk:host1", 00:22:38.159 "psk": "/tmp/tmp.IUaNKgUjwc" 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "nvmf_subsystem_add_ns", 00:22:38.159 "params": { 00:22:38.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.159 "namespace": { 00:22:38.159 "nsid": 1, 00:22:38.159 "bdev_name": "malloc0", 00:22:38.159 "nguid": "1426395578B04539950ACFFA95C052D3", 00:22:38.159 "uuid": "14263955-78b0-4539-950a-cffa95c052d3", 00:22:38.159 "no_auto_visible": false 00:22:38.159 } 00:22:38.159 } 00:22:38.159 }, 00:22:38.159 { 00:22:38.159 "method": "nvmf_subsystem_add_listener", 00:22:38.159 "params": { 00:22:38.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.159 "listen_address": { 00:22:38.159 "trtype": "TCP", 00:22:38.159 "adrfam": "IPv4", 00:22:38.159 "traddr": "10.0.0.2", 00:22:38.159 "trsvcid": "4420" 00:22:38.159 }, 00:22:38.159 
"secure_channel": true 00:22:38.159 } 00:22:38.159 } 00:22:38.159 ] 00:22:38.159 } 00:22:38.159 ] 00:22:38.159 }' 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3616610 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3616610 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3616610 ']' 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.159 13:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.159 [2024-07-12 13:30:35.556370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:38.159 [2024-07-12 13:30:35.556453] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.159 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.159 [2024-07-12 13:30:35.592976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:38.159 [2024-07-12 13:30:35.620102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.417 [2024-07-12 13:30:35.704224] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.417 [2024-07-12 13:30:35.704274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.417 [2024-07-12 13:30:35.704302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.417 [2024-07-12 13:30:35.704313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.417 [2024-07-12 13:30:35.704331] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.417 [2024-07-12 13:30:35.704426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.675 [2024-07-12 13:30:35.932491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.675 [2024-07-12 13:30:35.948429] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:38.675 [2024-07-12 13:30:35.964480] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:38.675 [2024-07-12 13:30:35.975512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3616761 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3616761 /var/tmp/bdevperf.sock 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3616761 ']' 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.263 13:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:39.263 "subsystems": [ 00:22:39.263 { 00:22:39.263 "subsystem": "keyring", 00:22:39.263 "config": [] 00:22:39.263 }, 00:22:39.263 { 00:22:39.263 "subsystem": "iobuf", 00:22:39.263 "config": [ 00:22:39.263 { 00:22:39.263 "method": "iobuf_set_options", 00:22:39.263 "params": { 00:22:39.263 "small_pool_count": 8192, 00:22:39.263 "large_pool_count": 1024, 00:22:39.263 "small_bufsize": 8192, 00:22:39.263 "large_bufsize": 135168 00:22:39.263 } 00:22:39.263 } 00:22:39.263 ] 00:22:39.263 }, 00:22:39.263 { 00:22:39.263 "subsystem": "sock", 00:22:39.263 "config": [ 00:22:39.263 { 00:22:39.263 "method": "sock_set_default_impl", 00:22:39.263 "params": { 00:22:39.263 "impl_name": "posix" 00:22:39.263 } 00:22:39.263 }, 00:22:39.263 { 00:22:39.263 "method": "sock_impl_set_options", 00:22:39.263 "params": { 00:22:39.263 "impl_name": "ssl", 00:22:39.263 "recv_buf_size": 4096, 00:22:39.263 "send_buf_size": 4096, 00:22:39.263 "enable_recv_pipe": true, 00:22:39.263 "enable_quickack": false, 00:22:39.263 "enable_placement_id": 0, 00:22:39.263 "enable_zerocopy_send_server": true, 00:22:39.263 "enable_zerocopy_send_client": false, 00:22:39.263 "zerocopy_threshold": 0, 00:22:39.263 "tls_version": 0, 00:22:39.263 "enable_ktls": false 00:22:39.263 } 00:22:39.263 }, 00:22:39.263 { 00:22:39.263 "method": "sock_impl_set_options", 00:22:39.263 "params": { 00:22:39.263 "impl_name": "posix", 00:22:39.263 "recv_buf_size": 2097152, 00:22:39.263 "send_buf_size": 2097152, 00:22:39.264 "enable_recv_pipe": true, 00:22:39.264 
"enable_quickack": false, 00:22:39.264 "enable_placement_id": 0, 00:22:39.264 "enable_zerocopy_send_server": true, 00:22:39.264 "enable_zerocopy_send_client": false, 00:22:39.264 "zerocopy_threshold": 0, 00:22:39.264 "tls_version": 0, 00:22:39.264 "enable_ktls": false 00:22:39.264 } 00:22:39.264 } 00:22:39.264 ] 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "subsystem": "vmd", 00:22:39.264 "config": [] 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "subsystem": "accel", 00:22:39.264 "config": [ 00:22:39.264 { 00:22:39.264 "method": "accel_set_options", 00:22:39.264 "params": { 00:22:39.264 "small_cache_size": 128, 00:22:39.264 "large_cache_size": 16, 00:22:39.264 "task_count": 2048, 00:22:39.264 "sequence_count": 2048, 00:22:39.264 "buf_count": 2048 00:22:39.264 } 00:22:39.264 } 00:22:39.264 ] 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "subsystem": "bdev", 00:22:39.264 "config": [ 00:22:39.264 { 00:22:39.264 "method": "bdev_set_options", 00:22:39.264 "params": { 00:22:39.264 "bdev_io_pool_size": 65535, 00:22:39.264 "bdev_io_cache_size": 256, 00:22:39.264 "bdev_auto_examine": true, 00:22:39.264 "iobuf_small_cache_size": 128, 00:22:39.264 "iobuf_large_cache_size": 16 00:22:39.264 } 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "method": "bdev_raid_set_options", 00:22:39.264 "params": { 00:22:39.264 "process_window_size_kb": 1024 00:22:39.264 } 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "method": "bdev_iscsi_set_options", 00:22:39.264 "params": { 00:22:39.264 "timeout_sec": 30 00:22:39.264 } 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "method": "bdev_nvme_set_options", 00:22:39.264 "params": { 00:22:39.264 "action_on_timeout": "none", 00:22:39.264 "timeout_us": 0, 00:22:39.264 "timeout_admin_us": 0, 00:22:39.264 "keep_alive_timeout_ms": 10000, 00:22:39.264 "arbitration_burst": 0, 00:22:39.264 "low_priority_weight": 0, 00:22:39.264 "medium_priority_weight": 0, 00:22:39.264 "high_priority_weight": 0, 00:22:39.264 "nvme_adminq_poll_period_us": 10000, 00:22:39.264 "nvme_ioq_poll_period_us": 0, 00:22:39.264 "io_queue_requests": 512, 00:22:39.264 "delay_cmd_submit": true, 00:22:39.264 "transport_retry_count": 4, 00:22:39.264 "bdev_retry_count": 3, 00:22:39.264 "transport_ack_timeout": 0, 00:22:39.264 "ctrlr_loss_timeout_sec": 0, 00:22:39.264 "reconnect_delay_sec": 0, 00:22:39.264 "fast_io_fail_timeout_sec": 0, 00:22:39.264 "disable_auto_failback": false, 00:22:39.264 "generate_uuids": false, 00:22:39.264 "transport_tos": 0, 00:22:39.264 "nvme_error_stat": false, 00:22:39.264 "rdma_srq_size": 0, 00:22:39.264 "io_path_stat": false, 00:22:39.264 "allow_accel_sequence": false, 00:22:39.264 "rdma_max_cq_size": 0, 00:22:39.264 "rdma_cm_event_timeout_ms": 0, 00:22:39.264 "dhchap_digests": [ 00:22:39.264 "sha256", 00:22:39.264 "sha384", 00:22:39.264 "sha512" 00:22:39.264 ], 00:22:39.264 "dhchap_dhgroups": [ 00:22:39.264 "null", 00:22:39.264 "ffdhe2048", 00:22:39.264 "ffdhe3072", 00:22:39.264 "ffdhe4096", 00:22:39.264 "ffdhe6144", 00:22:39.264 "ffdhe8192" 00:22:39.264 ] 00:22:39.264 } 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "method": "bdev_nvme_attach_controller", 00:22:39.264 "params": { 00:22:39.264 "name": "TLSTEST", 00:22:39.264 "trtype": "TCP", 00:22:39.264 "adrfam": "IPv4", 00:22:39.264 "traddr": "10.0.0.2", 00:22:39.264 "trsvcid": "4420", 00:22:39.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.264 "prchk_reftag": false, 00:22:39.264 "prchk_guard": false, 00:22:39.264 "ctrlr_loss_timeout_sec": 0, 00:22:39.264 "reconnect_delay_sec": 0, 00:22:39.264 "fast_io_fail_timeout_sec": 0, 00:22:39.264 
"psk": "/tmp/tmp.IUaNKgUjwc", 00:22:39.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.264 "hdgst": false, 00:22:39.264 "ddgst": false 00:22:39.264 } 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "method": "bdev_nvme_set_hotplug", 00:22:39.264 "params": { 00:22:39.264 "period_us": 100000, 00:22:39.264 "enable": false 00:22:39.264 } 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "method": "bdev_wait_for_examine" 00:22:39.264 } 00:22:39.264 ] 00:22:39.264 }, 00:22:39.264 { 00:22:39.264 "subsystem": "nbd", 00:22:39.264 "config": [] 00:22:39.264 } 00:22:39.264 ] 00:22:39.264 }' 00:22:39.264 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.264 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.264 13:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.264 [2024-07-12 13:30:36.606007] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:39.264 [2024-07-12 13:30:36.606086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616761 ] 00:22:39.264 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.264 [2024-07-12 13:30:36.637526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:39.264 [2024-07-12 13:30:36.664045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.522 [2024-07-12 13:30:36.747858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.522 [2024-07-12 13:30:36.916236] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.522 [2024-07-12 13:30:36.916412] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:40.090 13:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.090 13:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:40.090 13:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:40.347 Running I/O for 10 seconds... 
00:22:50.307 00:22:50.307 Latency(us) 00:22:50.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.307 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.307 Verification LBA range: start 0x0 length 0x2000 00:22:50.307 TLSTESTn1 : 10.05 2082.90 8.14 0.00 0.00 61284.13 8107.05 90876.59 00:22:50.307 =================================================================================================================== 00:22:50.307 Total : 2082.90 8.14 0.00 0.00 61284.13 8107.05 90876.59 00:22:50.307 0 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3616761 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3616761 ']' 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3616761 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3616761 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3616761' 00:22:50.307 killing process with pid 3616761 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3616761 00:22:50.307 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.307 00:22:50.307 Latency(us) 00:22:50.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.307 =================================================================================================================== 00:22:50.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.307 [2024-07-12 13:30:47.772592] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.307 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3616761 00:22:50.565 13:30:47 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3616610 00:22:50.565 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3616610 ']' 00:22:50.565 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3616610 00:22:50.565 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:50.565 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.565 13:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3616610 00:22:50.565 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:50.565 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:50.565 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3616610' 00:22:50.565 killing process with pid 3616610 00:22:50.565 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3616610 00:22:50.565 [2024-07-12 13:30:48.027466] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in 
v24.09 hit 1 times 00:22:50.565 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3616610 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3618090 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3618090 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3618090 ']' 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.822 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.080 [2024-07-12 13:30:48.326700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:51.080 [2024-07-12 13:30:48.326782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.080 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.080 [2024-07-12 13:30:48.361851] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:51.080 [2024-07-12 13:30:48.388418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.080 [2024-07-12 13:30:48.471362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.080 [2024-07-12 13:30:48.471427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.080 [2024-07-12 13:30:48.471439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.080 [2024-07-12 13:30:48.471450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.080 [2024-07-12 13:30:48.471472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
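The app_setup_trace notices repeated at each target start point at two ways to inspect the 0xFFFF tracepoint group the target was launched with. A sketch, assuming it is run on the same host while target instance id 0 is still up (both commands are quoted directly from the notices):

spdk_trace -s nvmf -i 0           # live snapshot of the nvmf tracepoints, as the notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the shared-memory trace file for offline analysis/debug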
00:22:51.080 [2024-07-12 13:30:48.471505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.IUaNKgUjwc 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IUaNKgUjwc 00:22:51.337 13:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:51.594 [2024-07-12 13:30:48.830606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.594 13:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:51.851 13:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.107 [2024-07-12 13:30:49.416139] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.107 [2024-07-12 13:30:49.416382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.107 13:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:52.364 malloc0 00:22:52.364 13:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:52.622 13:30:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IUaNKgUjwc 00:22:52.880 [2024-07-12 13:30:50.208825] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3618374 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3618374 /var/tmp/bdevperf.sock 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3618374 ']' 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.880 13:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.880 [2024-07-12 13:30:50.266188] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:52.880 [2024-07-12 13:30:50.266258] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618374 ] 00:22:52.880 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.880 [2024-07-12 13:30:50.297110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:52.880 [2024-07-12 13:30:50.324358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.137 [2024-07-12 13:30:50.410055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.137 13:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.137 13:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:53.137 13:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IUaNKgUjwc 00:22:53.394 13:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:53.652 [2024-07-12 13:30:50.982328] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.652 nvme0n1 00:22:53.652 13:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.909 Running I/O for 1 seconds... 
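This pass (tls.sh@218 onward) replaces the in-band PSK path with the keyring flow: the target registers the PSK file per host via nvmf_subsystem_add_host --psk, while the bdevperf side first adds the file as keyring key "key0" and then references the key name in bdev_nvme_attach_controller. The rpc.py sequence, collected from the trace above into one sketch (rpc.py is assumed to be scripts/rpc.py from the SPDK tree, on PATH):

# Target side (default RPC socket /var/tmp/spdk.sock)
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IUaNKgUjwc

# Initiator side (bdevperf RPC socket): register the PSK as a named key, then attach by key name
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IUaNKgUjwc
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests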
00:22:54.841 00:22:54.841 Latency(us) 00:22:54.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.841 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:54.841 Verification LBA range: start 0x0 length 0x2000 00:22:54.841 nvme0n1 : 1.05 2137.04 8.35 0.00 0.00 58594.96 6796.33 86604.61 00:22:54.841 =================================================================================================================== 00:22:54.841 Total : 2137.04 8.35 0.00 0.00 58594.96 6796.33 86604.61 00:22:54.841 0 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3618374 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3618374 ']' 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3618374 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3618374 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3618374' 00:22:54.841 killing process with pid 3618374 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3618374 00:22:54.841 Received shutdown signal, test time was about 1.000000 seconds 00:22:54.841 00:22:54.841 Latency(us) 00:22:54.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.841 =================================================================================================================== 00:22:54.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.841 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3618374 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3618090 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3618090 ']' 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3618090 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3618090 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3618090' 00:22:55.099 killing process with pid 3618090 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3618090 00:22:55.099 [2024-07-12 13:30:52.530347] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:55.099 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3618090 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.358 
13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3618654 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3618654 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3618654 ']' 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.358 13:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.358 [2024-07-12 13:30:52.824572] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:55.358 [2024-07-12 13:30:52.824686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.616 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.616 [2024-07-12 13:30:52.861979] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:55.616 [2024-07-12 13:30:52.887531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.616 [2024-07-12 13:30:52.964615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.616 [2024-07-12 13:30:52.964668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.616 [2024-07-12 13:30:52.964696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.616 [2024-07-12 13:30:52.964708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.616 [2024-07-12 13:30:52.964717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.616 [2024-07-12 13:30:52.964744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.616 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.616 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:55.616 13:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.616 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.616 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.872 [2024-07-12 13:30:53.101030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.872 malloc0 00:22:55.872 [2024-07-12 13:30:53.132844] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.872 [2024-07-12 13:30:53.133085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3618682 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3618682 /var/tmp/bdevperf.sock 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3618682 ']' 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.872 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.873 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.873 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.873 [2024-07-12 13:30:53.205917] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:55.873 [2024-07-12 13:30:53.205989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618682 ] 00:22:55.873 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.873 [2024-07-12 13:30:53.237983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
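In the trace that follows, the keyring-based attach is repeated and both sides then dump their running configuration with save_config (the tgtcfg and bperfcfg blobs below), presumably so the test can confirm the PSK is now referenced through the keyring rather than as a raw path. One way such a dump could be checked is sketched here; it assumes jq is installed and that the key was registered under the name "key0" as in the trace:

rpc.py -s /var/tmp/bdevperf.sock save_config \
    | jq -r '.subsystems[] | select(.subsystem == "keyring") | .config[].params.name'
# Expected to print: key0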
00:22:55.873 [2024-07-12 13:30:53.266393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.130 [2024-07-12 13:30:53.354391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.130 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.130 13:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:56.130 13:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IUaNKgUjwc 00:22:56.387 13:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:56.646 [2024-07-12 13:30:54.024935] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.646 nvme0n1 00:22:56.646 13:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:56.943 Running I/O for 1 seconds... 00:22:57.886 00:22:57.886 Latency(us) 00:22:57.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.886 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:57.886 Verification LBA range: start 0x0 length 0x2000 00:22:57.886 nvme0n1 : 1.04 2768.91 10.82 0.00 0.00 45333.37 6699.24 70293.43 00:22:57.886 =================================================================================================================== 00:22:57.886 Total : 2768.91 10.82 0.00 0.00 45333.37 6699.24 70293.43 00:22:57.886 0 00:22:57.886 13:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:57.886 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.886 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.143 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.143 13:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:58.143 "subsystems": [ 00:22:58.143 { 00:22:58.143 "subsystem": "keyring", 00:22:58.143 "config": [ 00:22:58.143 { 00:22:58.143 "method": "keyring_file_add_key", 00:22:58.143 "params": { 00:22:58.143 "name": "key0", 00:22:58.143 "path": "/tmp/tmp.IUaNKgUjwc" 00:22:58.143 } 00:22:58.143 } 00:22:58.143 ] 00:22:58.143 }, 00:22:58.143 { 00:22:58.143 "subsystem": "iobuf", 00:22:58.143 "config": [ 00:22:58.143 { 00:22:58.143 "method": "iobuf_set_options", 00:22:58.143 "params": { 00:22:58.143 "small_pool_count": 8192, 00:22:58.143 "large_pool_count": 1024, 00:22:58.143 "small_bufsize": 8192, 00:22:58.143 "large_bufsize": 135168 00:22:58.143 } 00:22:58.143 } 00:22:58.143 ] 00:22:58.143 }, 00:22:58.143 { 00:22:58.143 "subsystem": "sock", 00:22:58.143 "config": [ 00:22:58.143 { 00:22:58.143 "method": "sock_set_default_impl", 00:22:58.143 "params": { 00:22:58.143 "impl_name": "posix" 00:22:58.143 } 00:22:58.143 }, 00:22:58.143 { 00:22:58.143 "method": "sock_impl_set_options", 00:22:58.143 "params": { 00:22:58.143 "impl_name": "ssl", 00:22:58.143 "recv_buf_size": 4096, 00:22:58.143 "send_buf_size": 4096, 00:22:58.143 "enable_recv_pipe": true, 00:22:58.143 "enable_quickack": false, 00:22:58.143 "enable_placement_id": 0, 00:22:58.143 
"enable_zerocopy_send_server": true, 00:22:58.143 "enable_zerocopy_send_client": false, 00:22:58.143 "zerocopy_threshold": 0, 00:22:58.143 "tls_version": 0, 00:22:58.143 "enable_ktls": false 00:22:58.143 } 00:22:58.143 }, 00:22:58.143 { 00:22:58.143 "method": "sock_impl_set_options", 00:22:58.143 "params": { 00:22:58.143 "impl_name": "posix", 00:22:58.143 "recv_buf_size": 2097152, 00:22:58.143 "send_buf_size": 2097152, 00:22:58.143 "enable_recv_pipe": true, 00:22:58.143 "enable_quickack": false, 00:22:58.143 "enable_placement_id": 0, 00:22:58.143 "enable_zerocopy_send_server": true, 00:22:58.143 "enable_zerocopy_send_client": false, 00:22:58.143 "zerocopy_threshold": 0, 00:22:58.143 "tls_version": 0, 00:22:58.143 "enable_ktls": false 00:22:58.143 } 00:22:58.143 } 00:22:58.143 ] 00:22:58.143 }, 00:22:58.144 { 00:22:58.144 "subsystem": "vmd", 00:22:58.144 "config": [] 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "subsystem": "accel", 00:22:58.144 "config": [ 00:22:58.144 { 00:22:58.144 "method": "accel_set_options", 00:22:58.144 "params": { 00:22:58.144 "small_cache_size": 128, 00:22:58.144 "large_cache_size": 16, 00:22:58.144 "task_count": 2048, 00:22:58.144 "sequence_count": 2048, 00:22:58.144 "buf_count": 2048 00:22:58.144 } 00:22:58.144 } 00:22:58.144 ] 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "subsystem": "bdev", 00:22:58.144 "config": [ 00:22:58.144 { 00:22:58.144 "method": "bdev_set_options", 00:22:58.144 "params": { 00:22:58.144 "bdev_io_pool_size": 65535, 00:22:58.144 "bdev_io_cache_size": 256, 00:22:58.144 "bdev_auto_examine": true, 00:22:58.144 "iobuf_small_cache_size": 128, 00:22:58.144 "iobuf_large_cache_size": 16 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "bdev_raid_set_options", 00:22:58.144 "params": { 00:22:58.144 "process_window_size_kb": 1024 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "bdev_iscsi_set_options", 00:22:58.144 "params": { 00:22:58.144 "timeout_sec": 30 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "bdev_nvme_set_options", 00:22:58.144 "params": { 00:22:58.144 "action_on_timeout": "none", 00:22:58.144 "timeout_us": 0, 00:22:58.144 "timeout_admin_us": 0, 00:22:58.144 "keep_alive_timeout_ms": 10000, 00:22:58.144 "arbitration_burst": 0, 00:22:58.144 "low_priority_weight": 0, 00:22:58.144 "medium_priority_weight": 0, 00:22:58.144 "high_priority_weight": 0, 00:22:58.144 "nvme_adminq_poll_period_us": 10000, 00:22:58.144 "nvme_ioq_poll_period_us": 0, 00:22:58.144 "io_queue_requests": 0, 00:22:58.144 "delay_cmd_submit": true, 00:22:58.144 "transport_retry_count": 4, 00:22:58.144 "bdev_retry_count": 3, 00:22:58.144 "transport_ack_timeout": 0, 00:22:58.144 "ctrlr_loss_timeout_sec": 0, 00:22:58.144 "reconnect_delay_sec": 0, 00:22:58.144 "fast_io_fail_timeout_sec": 0, 00:22:58.144 "disable_auto_failback": false, 00:22:58.144 "generate_uuids": false, 00:22:58.144 "transport_tos": 0, 00:22:58.144 "nvme_error_stat": false, 00:22:58.144 "rdma_srq_size": 0, 00:22:58.144 "io_path_stat": false, 00:22:58.144 "allow_accel_sequence": false, 00:22:58.144 "rdma_max_cq_size": 0, 00:22:58.144 "rdma_cm_event_timeout_ms": 0, 00:22:58.144 "dhchap_digests": [ 00:22:58.144 "sha256", 00:22:58.144 "sha384", 00:22:58.144 "sha512" 00:22:58.144 ], 00:22:58.144 "dhchap_dhgroups": [ 00:22:58.144 "null", 00:22:58.144 "ffdhe2048", 00:22:58.144 "ffdhe3072", 00:22:58.144 "ffdhe4096", 00:22:58.144 "ffdhe6144", 00:22:58.144 "ffdhe8192" 00:22:58.144 ] 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": 
"bdev_nvme_set_hotplug", 00:22:58.144 "params": { 00:22:58.144 "period_us": 100000, 00:22:58.144 "enable": false 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "bdev_malloc_create", 00:22:58.144 "params": { 00:22:58.144 "name": "malloc0", 00:22:58.144 "num_blocks": 8192, 00:22:58.144 "block_size": 4096, 00:22:58.144 "physical_block_size": 4096, 00:22:58.144 "uuid": "164dc020-aaa6-426c-8b25-8a718669b45e", 00:22:58.144 "optimal_io_boundary": 0 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "bdev_wait_for_examine" 00:22:58.144 } 00:22:58.144 ] 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "subsystem": "nbd", 00:22:58.144 "config": [] 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "subsystem": "scheduler", 00:22:58.144 "config": [ 00:22:58.144 { 00:22:58.144 "method": "framework_set_scheduler", 00:22:58.144 "params": { 00:22:58.144 "name": "static" 00:22:58.144 } 00:22:58.144 } 00:22:58.144 ] 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "subsystem": "nvmf", 00:22:58.144 "config": [ 00:22:58.144 { 00:22:58.144 "method": "nvmf_set_config", 00:22:58.144 "params": { 00:22:58.144 "discovery_filter": "match_any", 00:22:58.144 "admin_cmd_passthru": { 00:22:58.144 "identify_ctrlr": false 00:22:58.144 } 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "nvmf_set_max_subsystems", 00:22:58.144 "params": { 00:22:58.144 "max_subsystems": 1024 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "nvmf_set_crdt", 00:22:58.144 "params": { 00:22:58.144 "crdt1": 0, 00:22:58.144 "crdt2": 0, 00:22:58.144 "crdt3": 0 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "nvmf_create_transport", 00:22:58.144 "params": { 00:22:58.144 "trtype": "TCP", 00:22:58.144 "max_queue_depth": 128, 00:22:58.144 "max_io_qpairs_per_ctrlr": 127, 00:22:58.144 "in_capsule_data_size": 4096, 00:22:58.144 "max_io_size": 131072, 00:22:58.144 "io_unit_size": 131072, 00:22:58.144 "max_aq_depth": 128, 00:22:58.144 "num_shared_buffers": 511, 00:22:58.144 "buf_cache_size": 4294967295, 00:22:58.144 "dif_insert_or_strip": false, 00:22:58.144 "zcopy": false, 00:22:58.144 "c2h_success": false, 00:22:58.144 "sock_priority": 0, 00:22:58.144 "abort_timeout_sec": 1, 00:22:58.144 "ack_timeout": 0, 00:22:58.144 "data_wr_pool_size": 0 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "nvmf_create_subsystem", 00:22:58.144 "params": { 00:22:58.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.144 "allow_any_host": false, 00:22:58.144 "serial_number": "00000000000000000000", 00:22:58.144 "model_number": "SPDK bdev Controller", 00:22:58.144 "max_namespaces": 32, 00:22:58.144 "min_cntlid": 1, 00:22:58.144 "max_cntlid": 65519, 00:22:58.144 "ana_reporting": false 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "nvmf_subsystem_add_host", 00:22:58.144 "params": { 00:22:58.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.144 "host": "nqn.2016-06.io.spdk:host1", 00:22:58.144 "psk": "key0" 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "nvmf_subsystem_add_ns", 00:22:58.144 "params": { 00:22:58.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.144 "namespace": { 00:22:58.144 "nsid": 1, 00:22:58.144 "bdev_name": "malloc0", 00:22:58.144 "nguid": "164DC020AAA6426C8B258A718669B45E", 00:22:58.144 "uuid": "164dc020-aaa6-426c-8b25-8a718669b45e", 00:22:58.144 "no_auto_visible": false 00:22:58.144 } 00:22:58.144 } 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "method": "nvmf_subsystem_add_listener", 00:22:58.144 "params": { 
00:22:58.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.144 "listen_address": { 00:22:58.144 "trtype": "TCP", 00:22:58.144 "adrfam": "IPv4", 00:22:58.144 "traddr": "10.0.0.2", 00:22:58.144 "trsvcid": "4420" 00:22:58.144 }, 00:22:58.144 "secure_channel": true 00:22:58.144 } 00:22:58.144 } 00:22:58.144 ] 00:22:58.144 } 00:22:58.144 ] 00:22:58.144 }' 00:22:58.144 13:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:58.402 13:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:58.402 "subsystems": [ 00:22:58.402 { 00:22:58.402 "subsystem": "keyring", 00:22:58.402 "config": [ 00:22:58.402 { 00:22:58.402 "method": "keyring_file_add_key", 00:22:58.402 "params": { 00:22:58.402 "name": "key0", 00:22:58.402 "path": "/tmp/tmp.IUaNKgUjwc" 00:22:58.402 } 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "subsystem": "iobuf", 00:22:58.402 "config": [ 00:22:58.402 { 00:22:58.402 "method": "iobuf_set_options", 00:22:58.402 "params": { 00:22:58.402 "small_pool_count": 8192, 00:22:58.402 "large_pool_count": 1024, 00:22:58.402 "small_bufsize": 8192, 00:22:58.402 "large_bufsize": 135168 00:22:58.402 } 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "subsystem": "sock", 00:22:58.402 "config": [ 00:22:58.402 { 00:22:58.402 "method": "sock_set_default_impl", 00:22:58.402 "params": { 00:22:58.402 "impl_name": "posix" 00:22:58.402 } 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "method": "sock_impl_set_options", 00:22:58.402 "params": { 00:22:58.402 "impl_name": "ssl", 00:22:58.402 "recv_buf_size": 4096, 00:22:58.402 "send_buf_size": 4096, 00:22:58.402 "enable_recv_pipe": true, 00:22:58.402 "enable_quickack": false, 00:22:58.402 "enable_placement_id": 0, 00:22:58.402 "enable_zerocopy_send_server": true, 00:22:58.402 "enable_zerocopy_send_client": false, 00:22:58.402 "zerocopy_threshold": 0, 00:22:58.402 "tls_version": 0, 00:22:58.402 "enable_ktls": false 00:22:58.402 } 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "method": "sock_impl_set_options", 00:22:58.402 "params": { 00:22:58.402 "impl_name": "posix", 00:22:58.402 "recv_buf_size": 2097152, 00:22:58.402 "send_buf_size": 2097152, 00:22:58.402 "enable_recv_pipe": true, 00:22:58.402 "enable_quickack": false, 00:22:58.402 "enable_placement_id": 0, 00:22:58.402 "enable_zerocopy_send_server": true, 00:22:58.402 "enable_zerocopy_send_client": false, 00:22:58.402 "zerocopy_threshold": 0, 00:22:58.402 "tls_version": 0, 00:22:58.402 "enable_ktls": false 00:22:58.402 } 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "subsystem": "vmd", 00:22:58.402 "config": [] 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "subsystem": "accel", 00:22:58.402 "config": [ 00:22:58.402 { 00:22:58.402 "method": "accel_set_options", 00:22:58.402 "params": { 00:22:58.402 "small_cache_size": 128, 00:22:58.402 "large_cache_size": 16, 00:22:58.402 "task_count": 2048, 00:22:58.402 "sequence_count": 2048, 00:22:58.402 "buf_count": 2048 00:22:58.402 } 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "subsystem": "bdev", 00:22:58.402 "config": [ 00:22:58.402 { 00:22:58.402 "method": "bdev_set_options", 00:22:58.402 "params": { 00:22:58.402 "bdev_io_pool_size": 65535, 00:22:58.402 "bdev_io_cache_size": 256, 00:22:58.402 "bdev_auto_examine": true, 00:22:58.402 "iobuf_small_cache_size": 128, 00:22:58.402 "iobuf_large_cache_size": 16 00:22:58.402 } 00:22:58.402 }, 00:22:58.402 { 
00:22:58.402 "method": "bdev_raid_set_options", 00:22:58.402 "params": { 00:22:58.402 "process_window_size_kb": 1024 00:22:58.402 } 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "method": "bdev_iscsi_set_options", 00:22:58.402 "params": { 00:22:58.402 "timeout_sec": 30 00:22:58.402 } 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "method": "bdev_nvme_set_options", 00:22:58.402 "params": { 00:22:58.402 "action_on_timeout": "none", 00:22:58.402 "timeout_us": 0, 00:22:58.402 "timeout_admin_us": 0, 00:22:58.402 "keep_alive_timeout_ms": 10000, 00:22:58.402 "arbitration_burst": 0, 00:22:58.402 "low_priority_weight": 0, 00:22:58.402 "medium_priority_weight": 0, 00:22:58.402 "high_priority_weight": 0, 00:22:58.402 "nvme_adminq_poll_period_us": 10000, 00:22:58.402 "nvme_ioq_poll_period_us": 0, 00:22:58.402 "io_queue_requests": 512, 00:22:58.402 "delay_cmd_submit": true, 00:22:58.402 "transport_retry_count": 4, 00:22:58.402 "bdev_retry_count": 3, 00:22:58.402 "transport_ack_timeout": 0, 00:22:58.402 "ctrlr_loss_timeout_sec": 0, 00:22:58.402 "reconnect_delay_sec": 0, 00:22:58.402 "fast_io_fail_timeout_sec": 0, 00:22:58.402 "disable_auto_failback": false, 00:22:58.402 "generate_uuids": false, 00:22:58.402 "transport_tos": 0, 00:22:58.402 "nvme_error_stat": false, 00:22:58.402 "rdma_srq_size": 0, 00:22:58.402 "io_path_stat": false, 00:22:58.402 "allow_accel_sequence": false, 00:22:58.402 "rdma_max_cq_size": 0, 00:22:58.402 "rdma_cm_event_timeout_ms": 0, 00:22:58.402 "dhchap_digests": [ 00:22:58.402 "sha256", 00:22:58.402 "sha384", 00:22:58.402 "sha512" 00:22:58.402 ], 00:22:58.402 "dhchap_dhgroups": [ 00:22:58.402 "null", 00:22:58.402 "ffdhe2048", 00:22:58.402 "ffdhe3072", 00:22:58.402 "ffdhe4096", 00:22:58.402 "ffdhe6144", 00:22:58.402 "ffdhe8192" 00:22:58.402 ] 00:22:58.402 } 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "method": "bdev_nvme_attach_controller", 00:22:58.403 "params": { 00:22:58.403 "name": "nvme0", 00:22:58.403 "trtype": "TCP", 00:22:58.403 "adrfam": "IPv4", 00:22:58.403 "traddr": "10.0.0.2", 00:22:58.403 "trsvcid": "4420", 00:22:58.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.403 "prchk_reftag": false, 00:22:58.403 "prchk_guard": false, 00:22:58.403 "ctrlr_loss_timeout_sec": 0, 00:22:58.403 "reconnect_delay_sec": 0, 00:22:58.403 "fast_io_fail_timeout_sec": 0, 00:22:58.403 "psk": "key0", 00:22:58.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.403 "hdgst": false, 00:22:58.403 "ddgst": false 00:22:58.403 } 00:22:58.403 }, 00:22:58.403 { 00:22:58.403 "method": "bdev_nvme_set_hotplug", 00:22:58.403 "params": { 00:22:58.403 "period_us": 100000, 00:22:58.403 "enable": false 00:22:58.403 } 00:22:58.403 }, 00:22:58.403 { 00:22:58.403 "method": "bdev_enable_histogram", 00:22:58.403 "params": { 00:22:58.403 "name": "nvme0n1", 00:22:58.403 "enable": true 00:22:58.403 } 00:22:58.403 }, 00:22:58.403 { 00:22:58.403 "method": "bdev_wait_for_examine" 00:22:58.403 } 00:22:58.403 ] 00:22:58.403 }, 00:22:58.403 { 00:22:58.403 "subsystem": "nbd", 00:22:58.403 "config": [] 00:22:58.403 } 00:22:58.403 ] 00:22:58.403 }' 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3618682 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3618682 ']' 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3618682 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.403 13:30:55 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3618682 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3618682' 00:22:58.403 killing process with pid 3618682 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3618682 00:22:58.403 Received shutdown signal, test time was about 1.000000 seconds 00:22:58.403 00:22:58.403 Latency(us) 00:22:58.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.403 =================================================================================================================== 00:22:58.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.403 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3618682 00:22:58.660 13:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3618654 00:22:58.660 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3618654 ']' 00:22:58.660 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3618654 00:22:58.660 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:58.660 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.660 13:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3618654 00:22:58.660 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.660 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.660 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3618654' 00:22:58.660 killing process with pid 3618654 00:22:58.660 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3618654 00:22:58.660 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3618654 00:22:58.918 13:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:58.918 13:30:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:58.918 13:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:58.918 "subsystems": [ 00:22:58.918 { 00:22:58.918 "subsystem": "keyring", 00:22:58.918 "config": [ 00:22:58.918 { 00:22:58.918 "method": "keyring_file_add_key", 00:22:58.918 "params": { 00:22:58.918 "name": "key0", 00:22:58.918 "path": "/tmp/tmp.IUaNKgUjwc" 00:22:58.918 } 00:22:58.918 } 00:22:58.918 ] 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "subsystem": "iobuf", 00:22:58.918 "config": [ 00:22:58.918 { 00:22:58.918 "method": "iobuf_set_options", 00:22:58.918 "params": { 00:22:58.918 "small_pool_count": 8192, 00:22:58.918 "large_pool_count": 1024, 00:22:58.918 "small_bufsize": 8192, 00:22:58.918 "large_bufsize": 135168 00:22:58.918 } 00:22:58.918 } 00:22:58.918 ] 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "subsystem": "sock", 00:22:58.918 "config": [ 00:22:58.918 { 00:22:58.918 "method": "sock_set_default_impl", 00:22:58.918 "params": { 00:22:58.918 "impl_name": "posix" 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "sock_impl_set_options", 00:22:58.918 "params": { 00:22:58.918 "impl_name": "ssl", 00:22:58.918 "recv_buf_size": 4096, 00:22:58.918 "send_buf_size": 4096, 00:22:58.918 
"enable_recv_pipe": true, 00:22:58.918 "enable_quickack": false, 00:22:58.918 "enable_placement_id": 0, 00:22:58.918 "enable_zerocopy_send_server": true, 00:22:58.918 "enable_zerocopy_send_client": false, 00:22:58.918 "zerocopy_threshold": 0, 00:22:58.918 "tls_version": 0, 00:22:58.918 "enable_ktls": false 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "sock_impl_set_options", 00:22:58.918 "params": { 00:22:58.918 "impl_name": "posix", 00:22:58.918 "recv_buf_size": 2097152, 00:22:58.918 "send_buf_size": 2097152, 00:22:58.918 "enable_recv_pipe": true, 00:22:58.918 "enable_quickack": false, 00:22:58.918 "enable_placement_id": 0, 00:22:58.918 "enable_zerocopy_send_server": true, 00:22:58.918 "enable_zerocopy_send_client": false, 00:22:58.918 "zerocopy_threshold": 0, 00:22:58.918 "tls_version": 0, 00:22:58.918 "enable_ktls": false 00:22:58.918 } 00:22:58.918 } 00:22:58.918 ] 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "subsystem": "vmd", 00:22:58.918 "config": [] 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "subsystem": "accel", 00:22:58.918 "config": [ 00:22:58.918 { 00:22:58.918 "method": "accel_set_options", 00:22:58.918 "params": { 00:22:58.918 "small_cache_size": 128, 00:22:58.918 "large_cache_size": 16, 00:22:58.918 "task_count": 2048, 00:22:58.918 "sequence_count": 2048, 00:22:58.918 "buf_count": 2048 00:22:58.918 } 00:22:58.918 } 00:22:58.918 ] 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "subsystem": "bdev", 00:22:58.918 "config": [ 00:22:58.918 { 00:22:58.918 "method": "bdev_set_options", 00:22:58.918 "params": { 00:22:58.918 "bdev_io_pool_size": 65535, 00:22:58.918 "bdev_io_cache_size": 256, 00:22:58.918 "bdev_auto_examine": true, 00:22:58.918 "iobuf_small_cache_size": 128, 00:22:58.918 "iobuf_large_cache_size": 16 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "bdev_raid_set_options", 00:22:58.918 "params": { 00:22:58.918 "process_window_size_kb": 1024 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "bdev_iscsi_set_options", 00:22:58.918 "params": { 00:22:58.918 "timeout_sec": 30 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "bdev_nvme_set_options", 00:22:58.918 "params": { 00:22:58.918 "action_on_timeout": "none", 00:22:58.918 "timeout_us": 0, 00:22:58.918 "timeout_admin_us": 0, 00:22:58.918 "keep_alive_timeout_ms": 10000, 00:22:58.918 "arbitration_burst": 0, 00:22:58.918 "low_priority_weight": 0, 00:22:58.918 "medium_priority_weight": 0, 00:22:58.918 "high_priority_weight": 0, 00:22:58.918 "nvme_adminq_poll_period_us": 10000, 00:22:58.918 "nvme_ioq_poll_period_us": 0, 00:22:58.918 "io_queue_requests": 0, 00:22:58.918 "delay_cmd_submit": true, 00:22:58.918 "transport_retry_count": 4, 00:22:58.918 "bdev_retry_count": 3, 00:22:58.918 "transport_ack_timeout": 0, 00:22:58.918 "ctrlr_loss_timeout_sec": 0, 00:22:58.918 "reconnect_delay_sec": 0, 00:22:58.918 "fast_io_fail_timeout_sec": 0, 00:22:58.918 "disable_auto_failback": false, 00:22:58.918 "generate_uuids": false, 00:22:58.918 "transport_tos": 0, 00:22:58.918 "nvme_error_stat": false, 00:22:58.918 "rdma_srq_size": 0, 00:22:58.918 "io_path_stat": false, 00:22:58.918 "allow_accel_sequence": false, 00:22:58.918 "rdma_max_cq_size": 0, 00:22:58.918 "rdma_cm_event_timeout_ms": 0, 00:22:58.918 "dhchap_digests": [ 00:22:58.918 "sha256", 00:22:58.918 "sha384", 00:22:58.918 "sha512" 00:22:58.918 ], 00:22:58.918 "dhchap_dhgroups": [ 00:22:58.918 "null", 00:22:58.918 "ffdhe2048", 00:22:58.918 "ffdhe3072", 00:22:58.918 "ffdhe4096", 00:22:58.918 "ffdhe6144", 
00:22:58.918 "ffdhe8192" 00:22:58.918 ] 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "bdev_nvme_set_hotplug", 00:22:58.918 "params": { 00:22:58.918 "period_us": 100000, 00:22:58.918 "enable": false 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "bdev_malloc_create", 00:22:58.918 "params": { 00:22:58.918 "name": "malloc0", 00:22:58.918 "num_blocks": 8192, 00:22:58.918 "block_size": 4096, 00:22:58.918 "physical_block_size": 4096, 00:22:58.918 "uuid": "164dc020-aaa6-426c-8b25-8a718669b45e", 00:22:58.918 "optimal_io_boundary": 0 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "bdev_wait_for_examine" 00:22:58.918 } 00:22:58.918 ] 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "subsystem": "nbd", 00:22:58.918 "config": [] 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "subsystem": "scheduler", 00:22:58.918 "config": [ 00:22:58.918 { 00:22:58.918 "method": "framework_set_scheduler", 00:22:58.918 "params": { 00:22:58.918 "name": "static" 00:22:58.918 } 00:22:58.918 } 00:22:58.918 ] 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "subsystem": "nvmf", 00:22:58.918 "config": [ 00:22:58.918 { 00:22:58.918 "method": "nvmf_set_config", 00:22:58.918 "params": { 00:22:58.918 "discovery_filter": "match_any", 00:22:58.918 "admin_cmd_passthru": { 00:22:58.918 "identify_ctrlr": false 00:22:58.918 } 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "nvmf_set_max_subsystems", 00:22:58.918 "params": { 00:22:58.918 "max_subsystems": 1024 00:22:58.918 } 00:22:58.918 }, 00:22:58.918 { 00:22:58.918 "method": "nvmf_set_crdt", 00:22:58.918 "params": { 00:22:58.918 "crdt1": 0, 00:22:58.918 "crdt2": 0, 00:22:58.919 "crdt3": 0 00:22:58.919 } 00:22:58.919 }, 00:22:58.919 { 00:22:58.919 "method": "nvmf_create_transport", 00:22:58.919 "params": { 00:22:58.919 "trtype": "TCP", 00:22:58.919 "max_queue_depth": 128, 00:22:58.919 "max_io_qpairs_per_ctrlr": 127, 00:22:58.919 "in_capsule_data_size": 4096, 00:22:58.919 "max_io_size": 131072, 00:22:58.919 "io_unit_size": 131072, 00:22:58.919 "max_aq_depth": 128, 00:22:58.919 "num_shared_buffers": 511, 00:22:58.919 "buf_cache_size": 4294967295, 00:22:58.919 "dif_insert_or_strip": false, 00:22:58.919 "zcopy": false, 00:22:58.919 "c2h_success": false, 00:22:58.919 "sock_priority": 0, 00:22:58.919 "abort_timeout_sec": 1, 00:22:58.919 "ack_timeout": 0, 00:22:58.919 "data_wr_pool_size": 0 00:22:58.919 } 00:22:58.919 }, 00:22:58.919 { 00:22:58.919 "method": "nvmf_create_subsystem", 00:22:58.919 "params": { 00:22:58.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.919 "allow_any_host": false, 00:22:58.919 "serial_number": "00000000000000000000", 00:22:58.919 "model_number": "SPDK bdev Controller", 00:22:58.919 "max_namespaces": 32, 00:22:58.919 "min_cntlid": 1, 00:22:58.919 "max_cntlid": 65519, 00:22:58.919 "ana_reporting": false 00:22:58.919 } 00:22:58.919 }, 00:22:58.919 { 00:22:58.919 "method": "nvmf_subsystem_add_host", 00:22:58.919 "params": { 00:22:58.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.919 "host": "nqn.2016-06.io.spdk:host1", 00:22:58.919 "psk": "key0" 00:22:58.919 } 00:22:58.919 }, 00:22:58.919 { 00:22:58.919 "method": "nvmf_subsystem_add_ns", 00:22:58.919 "params": { 00:22:58.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.919 "namespace": { 00:22:58.919 "nsid": 1, 00:22:58.919 "bdev_name": "malloc0", 00:22:58.919 "nguid": "164DC020AAA6426C8B258A718669B45E", 00:22:58.919 "uuid": "164dc020-aaa6-426c-8b25-8a718669b45e", 00:22:58.919 "no_auto_visible": false 00:22:58.919 } 00:22:58.919 } 
00:22:58.919 }, 00:22:58.919 { 00:22:58.919 "method": "nvmf_subsystem_add_listener", 00:22:58.919 "params": { 00:22:58.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.919 "listen_address": { 00:22:58.919 "trtype": "TCP", 00:22:58.919 "adrfam": "IPv4", 00:22:58.919 "traddr": "10.0.0.2", 00:22:58.919 "trsvcid": "4420" 00:22:58.919 }, 00:22:58.919 "secure_channel": true 00:22:58.919 } 00:22:58.919 } 00:22:58.919 ] 00:22:58.919 } 00:22:58.919 ] 00:22:58.919 }' 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3619088 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3619088 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3619088 ']' 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.919 13:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.919 [2024-07-12 13:30:56.297472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:58.919 [2024-07-12 13:30:56.297564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.919 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.919 [2024-07-12 13:30:56.333612] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:58.919 [2024-07-12 13:30:56.360406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.176 [2024-07-12 13:30:56.438009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.176 [2024-07-12 13:30:56.438064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.176 [2024-07-12 13:30:56.438091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.176 [2024-07-12 13:30:56.438101] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.176 [2024-07-12 13:30:56.438111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
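A note on the "-c /dev/fd/62" argument passed to nvmf_tgt above: the target configuration is not a file on disk; the test generates the JSON inline and hands it over through bash process substitution, which is what shows up as a /dev/fd/N path. A minimal sketch of the same mechanism, with an illustrative config and relative paths rather than the test's exact values:

    # Run nvmf_tgt inside the test namespace with an inline JSON config.
    # <(echo ...) expands to a /dev/fd/N path -- the /dev/fd/62 seen in this log.
    CONFIG='{"subsystems":[{"subsystem":"keyring","config":[{"method":"keyring_file_add_key","params":{"name":"key0","path":"/tmp/psk.txt"}}]}]}'
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$CONFIG")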
00:22:59.176 [2024-07-12 13:30:56.438184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.432 [2024-07-12 13:30:56.670209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.432 [2024-07-12 13:30:56.702220] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.432 [2024-07-12 13:30:56.713501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3619239 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3619239 /var/tmp/bdevperf.sock 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3619239 ']' 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
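Before the initiator-side configuration is echoed below, it helps to isolate the TLS-relevant pieces of the two config dumps in this log: both sides load the same PSK interchange file as keyring entry "key0"; the target references it in nvmf_subsystem_add_host and flags its listener as secure_channel, while the bdevperf initiator references it in bdev_nvme_attach_controller. A condensed excerpt, with values copied from the dumps (the rest of those dumps is the usual transport, subsystem, and bdev plumbing plus default tuning):

    Target side (fed to nvmf_tgt):
      {"method": "keyring_file_add_key", "params": {"name": "key0", "path": "/tmp/tmp.IUaNKgUjwc"}}
      {"method": "nvmf_subsystem_add_host", "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "host": "nqn.2016-06.io.spdk:host1", "psk": "key0"}}
      {"method": "nvmf_subsystem_add_listener", "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "listen_address": {"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}, "secure_channel": true}}
    Initiator side (fed to bdevperf):
      {"method": "keyring_file_add_key", "params": {"name": "key0", "path": "/tmp/tmp.IUaNKgUjwc"}}
      {"method": "bdev_nvme_attach_controller", "params": {"name": "nvme0", "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0"}}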
00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.997 13:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:59.997 "subsystems": [ 00:22:59.997 { 00:22:59.997 "subsystem": "keyring", 00:22:59.997 "config": [ 00:22:59.997 { 00:22:59.997 "method": "keyring_file_add_key", 00:22:59.997 "params": { 00:22:59.997 "name": "key0", 00:22:59.997 "path": "/tmp/tmp.IUaNKgUjwc" 00:22:59.997 } 00:22:59.997 } 00:22:59.997 ] 00:22:59.997 }, 00:22:59.997 { 00:22:59.997 "subsystem": "iobuf", 00:22:59.997 "config": [ 00:22:59.997 { 00:22:59.997 "method": "iobuf_set_options", 00:22:59.997 "params": { 00:22:59.997 "small_pool_count": 8192, 00:22:59.997 "large_pool_count": 1024, 00:22:59.997 "small_bufsize": 8192, 00:22:59.997 "large_bufsize": 135168 00:22:59.997 } 00:22:59.997 } 00:22:59.997 ] 00:22:59.997 }, 00:22:59.997 { 00:22:59.997 "subsystem": "sock", 00:22:59.997 "config": [ 00:22:59.997 { 00:22:59.997 "method": "sock_set_default_impl", 00:22:59.997 "params": { 00:22:59.997 "impl_name": "posix" 00:22:59.997 } 00:22:59.997 }, 00:22:59.997 { 00:22:59.997 "method": "sock_impl_set_options", 00:22:59.997 "params": { 00:22:59.997 "impl_name": "ssl", 00:22:59.997 "recv_buf_size": 4096, 00:22:59.997 "send_buf_size": 4096, 00:22:59.997 "enable_recv_pipe": true, 00:22:59.997 "enable_quickack": false, 00:22:59.997 "enable_placement_id": 0, 00:22:59.997 "enable_zerocopy_send_server": true, 00:22:59.997 "enable_zerocopy_send_client": false, 00:22:59.997 "zerocopy_threshold": 0, 00:22:59.997 "tls_version": 0, 00:22:59.997 "enable_ktls": false 00:22:59.997 } 00:22:59.997 }, 00:22:59.997 { 00:22:59.997 "method": "sock_impl_set_options", 00:22:59.997 "params": { 00:22:59.997 "impl_name": "posix", 00:22:59.997 "recv_buf_size": 2097152, 00:22:59.997 "send_buf_size": 2097152, 00:22:59.997 "enable_recv_pipe": true, 00:22:59.997 "enable_quickack": false, 00:22:59.997 "enable_placement_id": 0, 00:22:59.997 "enable_zerocopy_send_server": true, 00:22:59.997 "enable_zerocopy_send_client": false, 00:22:59.997 "zerocopy_threshold": 0, 00:22:59.997 "tls_version": 0, 00:22:59.997 "enable_ktls": false 00:22:59.997 } 00:22:59.997 } 00:22:59.997 ] 00:22:59.997 }, 00:22:59.997 { 00:22:59.997 "subsystem": "vmd", 00:22:59.997 "config": [] 00:22:59.997 }, 00:22:59.997 { 00:22:59.997 "subsystem": "accel", 00:22:59.997 "config": [ 00:22:59.997 { 00:22:59.997 "method": "accel_set_options", 00:22:59.997 "params": { 00:22:59.997 "small_cache_size": 128, 00:22:59.997 "large_cache_size": 16, 00:22:59.997 "task_count": 2048, 00:22:59.997 "sequence_count": 2048, 00:22:59.997 "buf_count": 2048 00:22:59.997 } 00:22:59.997 } 00:22:59.997 ] 00:22:59.997 }, 00:22:59.997 { 00:22:59.997 "subsystem": "bdev", 00:22:59.997 "config": [ 00:22:59.997 { 00:22:59.998 "method": "bdev_set_options", 00:22:59.998 "params": { 00:22:59.998 "bdev_io_pool_size": 65535, 00:22:59.998 "bdev_io_cache_size": 256, 00:22:59.998 "bdev_auto_examine": true, 00:22:59.998 "iobuf_small_cache_size": 128, 00:22:59.998 "iobuf_large_cache_size": 16 00:22:59.998 } 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "method": "bdev_raid_set_options", 00:22:59.998 "params": { 00:22:59.998 "process_window_size_kb": 1024 00:22:59.998 } 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "method": "bdev_iscsi_set_options", 00:22:59.998 "params": { 00:22:59.998 "timeout_sec": 30 00:22:59.998 } 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "method": "bdev_nvme_set_options", 00:22:59.998 "params": { 00:22:59.998 "action_on_timeout": "none", 
00:22:59.998 "timeout_us": 0, 00:22:59.998 "timeout_admin_us": 0, 00:22:59.998 "keep_alive_timeout_ms": 10000, 00:22:59.998 "arbitration_burst": 0, 00:22:59.998 "low_priority_weight": 0, 00:22:59.998 "medium_priority_weight": 0, 00:22:59.998 "high_priority_weight": 0, 00:22:59.998 "nvme_adminq_poll_period_us": 10000, 00:22:59.998 "nvme_ioq_poll_period_us": 0, 00:22:59.998 "io_queue_requests": 512, 00:22:59.998 "delay_cmd_submit": true, 00:22:59.998 "transport_retry_count": 4, 00:22:59.998 "bdev_retry_count": 3, 00:22:59.998 "transport_ack_timeout": 0, 00:22:59.998 "ctrlr_loss_timeout_sec": 0, 00:22:59.998 "reconnect_delay_sec": 0, 00:22:59.998 "fast_io_fail_timeout_sec": 0, 00:22:59.998 "disable_auto_failback": false, 00:22:59.998 "generate_uuids": false, 00:22:59.998 "transport_tos": 0, 00:22:59.998 "nvme_error_stat": false, 00:22:59.998 "rdma_srq_size": 0, 00:22:59.998 "io_path_stat": false, 00:22:59.998 "allow_accel_sequence": false, 00:22:59.998 "rdma_max_cq_size": 0, 00:22:59.998 "rdma_cm_event_timeout_ms": 0, 00:22:59.998 "dhchap_digests": [ 00:22:59.998 "sha256", 00:22:59.998 "sha384", 00:22:59.998 "sha512" 00:22:59.998 ], 00:22:59.998 "dhchap_dhgroups": [ 00:22:59.998 "null", 00:22:59.998 "ffdhe2048", 00:22:59.998 "ffdhe3072", 00:22:59.998 "ffdhe4096", 00:22:59.998 "ffdhe6144", 00:22:59.998 "ffdhe8192" 00:22:59.998 ] 00:22:59.998 } 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "method": "bdev_nvme_attach_controller", 00:22:59.998 "params": { 00:22:59.998 "name": "nvme0", 00:22:59.998 "trtype": "TCP", 00:22:59.998 "adrfam": "IPv4", 00:22:59.998 "traddr": "10.0.0.2", 00:22:59.998 "trsvcid": "4420", 00:22:59.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.998 "prchk_reftag": false, 00:22:59.998 "prchk_guard": false, 00:22:59.998 "ctrlr_loss_timeout_sec": 0, 00:22:59.998 "reconnect_delay_sec": 0, 00:22:59.998 "fast_io_fail_timeout_sec": 0, 00:22:59.998 "psk": "key0", 00:22:59.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.998 "hdgst": false, 00:22:59.998 "ddgst": false 00:22:59.998 } 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "method": "bdev_nvme_set_hotplug", 00:22:59.998 "params": { 00:22:59.998 "period_us": 100000, 00:22:59.998 "enable": false 00:22:59.998 } 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "method": "bdev_enable_histogram", 00:22:59.998 "params": { 00:22:59.998 "name": "nvme0n1", 00:22:59.998 "enable": true 00:22:59.998 } 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "method": "bdev_wait_for_examine" 00:22:59.998 } 00:22:59.998 ] 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "subsystem": "nbd", 00:22:59.998 "config": [] 00:22:59.998 } 00:22:59.998 ] 00:22:59.998 }' 00:22:59.998 13:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.998 [2024-07-12 13:30:57.285051] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:22:59.998 [2024-07-12 13:30:57.285125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619239 ] 00:22:59.998 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.998 [2024-07-12 13:30:57.316362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:59.998 [2024-07-12 13:30:57.343713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.998 [2024-07-12 13:30:57.428975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.256 [2024-07-12 13:30:57.594438] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.820 13:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.820 13:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:00.820 13:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.820 13:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:01.077 13:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.077 13:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.333 Running I/O for 1 seconds... 00:23:02.264 00:23:02.264 Latency(us) 00:23:02.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.264 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:02.264 Verification LBA range: start 0x0 length 0x2000 00:23:02.264 nvme0n1 : 1.04 2594.78 10.14 0.00 0.00 48404.32 10971.21 79614.10 00:23:02.264 =================================================================================================================== 00:23:02.264 Total : 2594.78 10.14 0.00 0.00 48404.32 10971.21 79614.10 00:23:02.264 0 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:02.264 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:02.264 nvmf_trace.0 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3619239 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3619239 ']' 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3619239 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3619239 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3619239' 00:23:02.522 killing process with pid 3619239 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3619239 00:23:02.522 Received shutdown signal, test time was about 1.000000 seconds 00:23:02.522 00:23:02.522 Latency(us) 00:23:02.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.522 =================================================================================================================== 00:23:02.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.522 13:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3619239 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.779 rmmod nvme_tcp 00:23:02.779 rmmod nvme_fabrics 00:23:02.779 rmmod nvme_keyring 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3619088 ']' 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3619088 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3619088 ']' 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3619088 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3619088 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3619088' 00:23:02.779 killing process with pid 3619088 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3619088 00:23:02.779 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3619088 00:23:03.037 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.037 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.037 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.037 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.037 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.037 13:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:03.037 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.037 13:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.938 13:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:04.938 13:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.x4QY0gb5bw /tmp/tmp.O54LtBrqV4 /tmp/tmp.IUaNKgUjwc 00:23:04.938 00:23:04.938 real 1m19.213s 00:23:04.938 user 1m59.347s 00:23:04.938 sys 0m29.177s 00:23:04.938 13:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:04.938 13:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.938 ************************************ 00:23:04.938 END TEST nvmf_tls 00:23:04.938 ************************************ 00:23:05.195 13:31:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:05.195 13:31:02 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:05.195 13:31:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:05.195 13:31:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.195 13:31:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.195 ************************************ 00:23:05.195 START TEST nvmf_fips 00:23:05.195 ************************************ 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:05.195 * Looking for test storage... 00:23:05.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:05.195 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:05.196 Error setting digest 00:23:05.196 0052FEB91F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:05.196 0052FEB91F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.196 13:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.723 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.723 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.723 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.723 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.723 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.723 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.723 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.723 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.724 
13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:07.724 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:07.724 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:07.724 Found net devices under 0000:09:00.0: cvl_0_0 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:07.724 Found net devices under 0000:09:00.1: cvl_0_1 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:07.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:23:07.724 00:23:07.724 --- 10.0.0.2 ping statistics --- 00:23:07.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.724 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:23:07.724 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:23:07.724 00:23:07.724 --- 10.0.0.1 ping statistics --- 00:23:07.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.725 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3621563 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3621563 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3621563 ']' 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.725 13:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.725 [2024-07-12 13:31:04.932471] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:23:07.725 [2024-07-12 13:31:04.932553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.725 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.725 [2024-07-12 13:31:04.971063] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:07.725 [2024-07-12 13:31:04.997127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.725 [2024-07-12 13:31:05.084679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
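Note on the step traced here: the FIPS run starts its NVMe-oF target inside the cvl_0_0_ns_spdk namespace and then blocks until the RPC socket answers. A condensed, non-authoritative sketch of that step, with the paths and core mask taken from this run's log; the polling loop is a simplified stand-in for the autotest waitforlisten helper, not its real implementation:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # launch the target in the test namespace, tracepoints enabled, core mask 0x2 (as in the trace)
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!   # pid of the ip-netns wrapper here; the real helper tracks the nvmf_tgt pid itself
  # crude stand-in for waitforlisten: poll until the RPC socket exists
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done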
00:23:07.725 [2024-07-12 13:31:05.084731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.725 [2024-07-12 13:31:05.084744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.725 [2024-07-12 13:31:05.084755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.725 [2024-07-12 13:31:05.084764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.725 [2024-07-12 13:31:05.084790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.725 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.725 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:07.725 13:31:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.725 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:07.725 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.981 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:08.238 [2024-07-12 13:31:05.496418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.238 [2024-07-12 13:31:05.512416] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.238 [2024-07-12 13:31:05.512648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.238 [2024-07-12 13:31:05.543643] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:08.238 malloc0 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3621621 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3621621 /var/tmp/bdevperf.sock 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3621621 ']' 00:23:08.238 13:31:05 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.238 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:08.238 [2024-07-12 13:31:05.635084] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:23:08.238 [2024-07-12 13:31:05.635162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621621 ] 00:23:08.238 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.238 [2024-07-12 13:31:05.666378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:08.238 [2024-07-12 13:31:05.693006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.495 [2024-07-12 13:31:05.781233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.495 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.495 13:31:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:08.495 13:31:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:08.751 [2024-07-12 13:31:06.120292] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.751 [2024-07-12 13:31:06.120448] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.751 TLSTESTn1 00:23:08.751 13:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.007 Running I/O for 10 seconds... 
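Behind the TLSTESTn1 job there are three moving parts, all visible in the trace above: bdevperf started in RPC-wait mode (-z), an NVMe/TCP controller attached over the experimental TLS path using the PSK written to key.txt, and the perform_tests RPC that kicks off the verify workload. A condensed sketch of those steps, with the jenkins workspace prefix shortened to $SPDK:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  KEY=$SPDK/test/nvmf/fips/key.txt   # NVMeTLSkey-1:01:... written and chmod 0600 by fips.sh above
  # start bdevperf idle, waiting for RPCs on its own socket
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  # attach an NVMe/TCP controller to the target listener, authenticating with the PSK
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # start the 10-second verify workload
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests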
00:23:19.021 00:23:19.021 Latency(us) 00:23:19.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.021 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:19.021 Verification LBA range: start 0x0 length 0x2000 00:23:19.021 TLSTESTn1 : 10.06 1934.56 7.56 0.00 0.00 65971.51 7087.60 90099.86 00:23:19.021 =================================================================================================================== 00:23:19.021 Total : 1934.56 7.56 0.00 0.00 65971.51 7087.60 90099.86 00:23:19.021 0 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:19.021 nvmf_trace.0 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3621621 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3621621 ']' 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3621621 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.021 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3621621 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3621621' 00:23:19.279 killing process with pid 3621621 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3621621 00:23:19.279 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.279 00:23:19.279 Latency(us) 00:23:19.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.279 =================================================================================================================== 00:23:19.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.279 [2024-07-12 13:31:16.504454] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3621621 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.279 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.279 rmmod nvme_tcp 00:23:19.537 rmmod nvme_fabrics 00:23:19.537 rmmod nvme_keyring 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3621563 ']' 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3621563 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3621563 ']' 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3621563 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3621563 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3621563' 00:23:19.537 killing process with pid 3621563 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3621563 00:23:19.537 [2024-07-12 13:31:16.820459] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:19.537 13:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3621563 00:23:19.794 13:31:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.794 13:31:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.794 13:31:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.794 13:31:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.794 13:31:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.794 13:31:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.794 13:31:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.794 13:31:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.697 13:31:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:21.697 13:31:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:21.697 00:23:21.697 real 0m16.669s 00:23:21.697 user 0m20.642s 00:23:21.697 sys 0m6.360s 00:23:21.697 13:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:21.697 13:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:21.697 ************************************ 00:23:21.697 END TEST nvmf_fips 
00:23:21.697 ************************************ 00:23:21.697 13:31:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:21.697 13:31:19 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:21.697 13:31:19 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:21.697 13:31:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:21.697 13:31:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.697 13:31:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:21.697 ************************************ 00:23:21.697 START TEST nvmf_fuzz 00:23:21.697 ************************************ 00:23:21.697 13:31:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:21.955 * Looking for test storage... 00:23:21.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.955 13:31:19 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.955 13:31:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:23.857 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:23.857 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:23.857 Found net devices under 0000:09:00.0: cvl_0_0 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:23.857 Found net devices under 0000:09:00.1: cvl_0_1 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:23:23.857 00:23:23.857 --- 10.0.0.2 ping statistics --- 00:23:23.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.857 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:23:23.857 00:23:23.857 --- 10.0.0.1 ping statistics --- 00:23:23.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.857 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.857 13:31:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3624874 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3624874 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3624874 ']' 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
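As in the FIPS run, nvmf_tcp_init wires the two E810 ports into a back-to-back TCP path before the fuzz target starts: the target port (cvl_0_0, 10.0.0.2) is moved into a network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace. The commands below are gathered from the trace above into one sketch; the interface names are specific to this host, and the helper performs extra checks not repeated here:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator
  modprobe nvme-tcp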
00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.116 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.374 Malloc0 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:24.374 13:31:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:56.430 Fuzzing completed. 
Shutting down the fuzz application 00:23:56.430 00:23:56.430 Dumping successful admin opcodes: 00:23:56.430 8, 9, 10, 24, 00:23:56.430 Dumping successful io opcodes: 00:23:56.430 0, 9, 00:23:56.430 NS: 0x200003aeff00 I/O qp, Total commands completed: 515508, total successful commands: 2982, random_seed: 1528170304 00:23:56.430 NS: 0x200003aeff00 admin qp, Total commands completed: 63232, total successful commands: 497, random_seed: 1701635008 00:23:56.430 13:31:52 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:56.430 Fuzzing completed. Shutting down the fuzz application 00:23:56.430 00:23:56.430 Dumping successful admin opcodes: 00:23:56.430 24, 00:23:56.430 Dumping successful io opcodes: 00:23:56.430 00:23:56.430 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1706833261 00:23:56.430 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1706954863 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.430 rmmod nvme_tcp 00:23:56.430 rmmod nvme_fabrics 00:23:56.430 rmmod nvme_keyring 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3624874 ']' 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3624874 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3624874 ']' 00:23:56.430 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 3624874 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3624874 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
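The two fuzz passes reported above come from the same nvme_fuzz binary pointed at the listener created a moment earlier: first a longer randomized pass (the -t/-S arguments are presumably run time and RNG seed), then a short pass driven by the bundled example.json. Reproduced from the trace with the workspace prefix shortened to $SPDK:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # randomized pass, flags exactly as in the trace above
  "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
  # replay pass using the example command set shipped with the fuzzer
  "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -F "$TRID" \
      -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a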
00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3624874' 00:23:56.431 killing process with pid 3624874 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 3624874 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 3624874 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.431 13:31:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.334 13:31:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.334 13:31:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:58.592 00:23:58.592 real 0m36.665s 00:23:58.592 user 0m51.150s 00:23:58.592 sys 0m14.483s 00:23:58.592 13:31:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.592 13:31:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:58.592 ************************************ 00:23:58.592 END TEST nvmf_fuzz 00:23:58.592 ************************************ 00:23:58.592 13:31:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:58.592 13:31:55 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:58.592 13:31:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:58.592 13:31:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.592 13:31:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:58.592 ************************************ 00:23:58.592 START TEST nvmf_multiconnection 00:23:58.592 ************************************ 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:58.592 * Looking for test storage... 
00:23:58.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.592 13:31:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.496 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.497 13:31:57 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:00.497 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:00.497 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:00.497 Found net devices under 0000:09:00.0: cvl_0_0 00:24:00.497 13:31:57 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:00.497 Found net devices under 0000:09:00.1: cvl_0_1 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.497 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:00.756 13:31:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:24:00.756 00:24:00.756 --- 10.0.0.2 ping statistics --- 00:24:00.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.756 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:24:00.756 00:24:00.756 --- 10.0.0.1 ping statistics --- 00:24:00.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.756 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3630488 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3630488 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 3630488 ']' 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
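The trace above builds a loopback NVMe/TCP topology before the target starts: one port of the ice NIC (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side. A minimal sketch of that setup, assuming root privileges and using only the commands and addresses visible in this run:

# target side: isolate cvl_0_0 in its own namespace as 10.0.0.2
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# initiator side: cvl_0_1 keeps 10.0.0.1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through

# sanity-check reachability in both directions before launching the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both pings answering, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so every listener it opens is reachable from the host only via 10.0.0.2:4420.
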
00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.756 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.756 [2024-07-12 13:31:58.094695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:24:00.756 [2024-07-12 13:31:58.094773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.756 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.756 [2024-07-12 13:31:58.137513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:00.756 [2024-07-12 13:31:58.163119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.073 [2024-07-12 13:31:58.255785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.073 [2024-07-12 13:31:58.255850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.073 [2024-07-12 13:31:58.255864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.073 [2024-07-12 13:31:58.255875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.073 [2024-07-12 13:31:58.255884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.073 [2024-07-12 13:31:58.256000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.073 [2024-07-12 13:31:58.256060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.073 [2024-07-12 13:31:58.256030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.073 [2024-07-12 13:31:58.256062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 [2024-07-12 13:31:58.410131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 Malloc1 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 [2024-07-12 13:31:58.466460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 Malloc2 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.073 Malloc3 00:24:01.073 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 Malloc4 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 
Malloc4 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 Malloc5 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 Malloc6 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:01.332 13:31:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 Malloc7 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.332 Malloc8 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.332 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 Malloc9 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 13:31:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 Malloc10 00:24:01.590 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.591 Malloc11 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.591 13:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:02.156 13:31:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:02.156 13:31:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:02.156 13:31:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.156 13:31:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:02.156 13:31:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:04.680 13:32:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:04.680 13:32:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:04.680 13:32:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:04.680 13:32:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:04.680 13:32:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.680 13:32:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:04.680 13:32:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.680 13:32:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:04.937 13:32:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:04.937 13:32:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:04.937 13:32:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:04.937 13:32:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:04.937 13:32:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:06.830 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:06.830 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:06.830 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:06.830 13:32:04 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:06.830 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:06.831 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:06.831 13:32:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.831 13:32:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:07.394 13:32:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:07.394 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:07.394 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.394 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:07.394 13:32:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:09.917 13:32:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:09.917 13:32:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:09.917 13:32:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:09.917 13:32:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:09.917 13:32:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:09.917 13:32:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:09.917 13:32:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.917 13:32:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:10.175 13:32:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:10.175 13:32:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:10.175 13:32:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:10.175 13:32:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:10.175 13:32:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:12.698 13:32:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:12.698 13:32:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:12.698 13:32:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:12.698 13:32:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:12.698 13:32:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:12.698 13:32:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 
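Every iteration of the setup and connect loops above follows the same pattern, repeated for N = 1..11 after the TCP transport has been created with rpc_cmd nvmf_create_transport -t tcp -o -u 8192. Condensed here for one subsystem as a sketch (rpc_cmd is the test harness's helper for issuing SPDK JSON-RPC calls to the target in the namespace, roughly equivalent to scripts/rpc.py; the HOSTID variable is introduced only for readability and holds the host UUID used throughout this run):

# target side: one 64 MB malloc bdev (512-byte blocks) exported through its own subsystem
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1    # -a: allow any host, -s: serial number
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: connect with the kernel NVMe/TCP initiator, then wait for the serial to show up
HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid=$HOSTID \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDK1    # waitforserial sleeps 2 s and repeats this (up to 16 tries) until it prints 1

The connect/verify cycles that follow are the same commands with cnode2..cnode11 and serials SPDK2..SPDK11; once all eleven namespaces are visible as /dev/nvme*n1 block devices, the fio read workload is started against them.
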
00:24:12.698 13:32:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.698 13:32:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:13.261 13:32:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:13.261 13:32:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:13.261 13:32:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:13.261 13:32:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:13.261 13:32:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:15.153 13:32:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:15.153 13:32:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:15.153 13:32:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:15.153 13:32:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:15.153 13:32:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:15.153 13:32:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:15.153 13:32:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.153 13:32:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:16.088 13:32:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:16.088 13:32:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:16.088 13:32:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.088 13:32:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:16.088 13:32:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:17.990 13:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:17.990 13:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:17.990 13:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:17.990 13:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:17.990 13:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:17.990 13:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:17.990 13:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.990 13:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:18.950 13:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:18.950 13:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:18.950 13:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.950 13:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:18.950 13:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:20.847 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:20.847 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:20.847 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:20.847 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:20.847 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.847 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:20.847 13:32:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.847 13:32:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:21.416 13:32:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:21.416 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.416 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.416 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:21.416 13:32:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:23.948 13:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:23.948 13:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:23.948 13:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:23.948 13:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:23.948 13:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.948 13:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:23.948 13:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.948 13:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:24.516 13:32:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:24.516 13:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 
00:24:24.516 13:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.516 13:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:24.516 13:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:26.419 13:32:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:26.419 13:32:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:26.419 13:32:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:26.419 13:32:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:26.419 13:32:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.419 13:32:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:26.419 13:32:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.419 13:32:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:27.353 13:32:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:27.354 13:32:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:27.354 13:32:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.354 13:32:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:27.354 13:32:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:29.885 13:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:29.885 13:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:29.885 13:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:29.885 13:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:29.885 13:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:29.885 13:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:29.885 13:32:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:29.885 13:32:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:30.460 13:32:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:30.460 13:32:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:30.460 13:32:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.460 13:32:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:30.460 13:32:27 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1205 -- # sleep 2 00:24:32.363 13:32:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:32.363 13:32:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:32.363 13:32:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:32.363 13:32:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:32.363 13:32:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.363 13:32:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:32.363 13:32:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:32.363 [global] 00:24:32.363 thread=1 00:24:32.363 invalidate=1 00:24:32.363 rw=read 00:24:32.363 time_based=1 00:24:32.363 runtime=10 00:24:32.364 ioengine=libaio 00:24:32.364 direct=1 00:24:32.364 bs=262144 00:24:32.364 iodepth=64 00:24:32.364 norandommap=1 00:24:32.364 numjobs=1 00:24:32.364 00:24:32.364 [job0] 00:24:32.364 filename=/dev/nvme0n1 00:24:32.364 [job1] 00:24:32.364 filename=/dev/nvme10n1 00:24:32.364 [job2] 00:24:32.364 filename=/dev/nvme1n1 00:24:32.364 [job3] 00:24:32.364 filename=/dev/nvme2n1 00:24:32.364 [job4] 00:24:32.364 filename=/dev/nvme3n1 00:24:32.364 [job5] 00:24:32.364 filename=/dev/nvme4n1 00:24:32.364 [job6] 00:24:32.364 filename=/dev/nvme5n1 00:24:32.364 [job7] 00:24:32.364 filename=/dev/nvme6n1 00:24:32.364 [job8] 00:24:32.364 filename=/dev/nvme7n1 00:24:32.364 [job9] 00:24:32.364 filename=/dev/nvme8n1 00:24:32.364 [job10] 00:24:32.364 filename=/dev/nvme9n1 00:24:32.622 Could not set queue depth (nvme0n1) 00:24:32.622 Could not set queue depth (nvme10n1) 00:24:32.622 Could not set queue depth (nvme1n1) 00:24:32.622 Could not set queue depth (nvme2n1) 00:24:32.622 Could not set queue depth (nvme3n1) 00:24:32.622 Could not set queue depth (nvme4n1) 00:24:32.622 Could not set queue depth (nvme5n1) 00:24:32.622 Could not set queue depth (nvme6n1) 00:24:32.622 Could not set queue depth (nvme7n1) 00:24:32.622 Could not set queue depth (nvme8n1) 00:24:32.622 Could not set queue depth (nvme9n1) 00:24:32.622 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.622 fio-3.35 00:24:32.622 Starting 11 threads 00:24:44.855 00:24:44.855 job0: (groupid=0, jobs=1): err= 0: pid=3634742: Fri Jul 12 13:32:40 2024 00:24:44.855 read: IOPS=764, BW=191MiB/s (200MB/s)(1928MiB/10085msec) 00:24:44.855 slat (usec): min=14, max=93214, avg=1273.04, stdev=3673.93 00:24:44.855 clat (msec): min=6, max=219, avg=82.32, stdev=37.34 00:24:44.855 lat (msec): min=7, max=219, avg=83.60, stdev=37.90 00:24:44.855 clat percentiles (msec): 00:24:44.855 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 54], 00:24:44.855 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 79], 00:24:44.855 | 70.00th=[ 90], 80.00th=[ 110], 90.00th=[ 146], 95.00th=[ 159], 00:24:44.855 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 215], 00:24:44.855 | 99.99th=[ 220] 00:24:44.855 bw ( KiB/s): min=90112, max=293376, per=10.76%, avg=195840.00, stdev=67023.64, samples=20 00:24:44.855 iops : min= 352, max= 1146, avg=765.00, stdev=261.81, samples=20 00:24:44.855 lat (msec) : 10=0.05%, 20=0.60%, 50=15.75%, 100=59.81%, 250=23.79% 00:24:44.855 cpu : usr=0.44%, sys=2.72%, ctx=1504, majf=0, minf=4097 00:24:44.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:44.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.855 issued rwts: total=7713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.855 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.855 job1: (groupid=0, jobs=1): err= 0: pid=3634743: Fri Jul 12 13:32:40 2024 00:24:44.855 read: IOPS=653, BW=163MiB/s (171MB/s)(1652MiB/10111msec) 00:24:44.855 slat (usec): min=12, max=124006, avg=1214.81, stdev=5118.55 00:24:44.855 clat (msec): min=3, max=303, avg=96.58, stdev=56.66 00:24:44.855 lat (msec): min=3, max=330, avg=97.79, stdev=57.47 00:24:44.855 clat percentiles (msec): 00:24:44.855 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 43], 00:24:44.855 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 91], 60.00th=[ 110], 00:24:44.855 | 70.00th=[ 126], 80.00th=[ 146], 90.00th=[ 182], 95.00th=[ 201], 00:24:44.855 | 99.00th=[ 236], 99.50th=[ 251], 99.90th=[ 284], 99.95th=[ 284], 00:24:44.855 | 99.99th=[ 305] 00:24:44.855 bw ( KiB/s): min=98816, max=337920, per=9.21%, avg=167577.60, stdev=60652.83, samples=20 00:24:44.855 iops : min= 386, max= 1320, avg=654.60, stdev=236.93, samples=20 00:24:44.855 lat (msec) : 4=0.06%, 10=1.20%, 20=4.81%, 50=19.06%, 100=29.41% 00:24:44.855 lat (msec) : 250=44.92%, 500=0.53% 00:24:44.855 cpu : usr=0.33%, sys=2.45%, ctx=1581, majf=0, minf=4097 00:24:44.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:44.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.855 issued rwts: total=6609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.855 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.855 job2: (groupid=0, jobs=1): err= 0: pid=3634744: Fri Jul 12 13:32:40 2024 00:24:44.855 read: IOPS=587, BW=147MiB/s (154MB/s)(1485MiB/10114msec) 00:24:44.855 slat (usec): min=10, max=127245, avg=1323.03, stdev=4921.30 00:24:44.855 clat (msec): min=3, max=279, avg=107.53, stdev=49.96 00:24:44.855 lat (msec): min=3, max=279, avg=108.86, 
stdev=50.71 00:24:44.855 clat percentiles (msec): 00:24:44.855 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 40], 20.00th=[ 63], 00:24:44.855 | 30.00th=[ 77], 40.00th=[ 89], 50.00th=[ 108], 60.00th=[ 127], 00:24:44.855 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 169], 95.00th=[ 182], 00:24:44.855 | 99.00th=[ 239], 99.50th=[ 257], 99.90th=[ 279], 99.95th=[ 279], 00:24:44.855 | 99.99th=[ 279] 00:24:44.855 bw ( KiB/s): min=93696, max=226304, per=8.27%, avg=150451.20, stdev=36498.88, samples=20 00:24:44.855 iops : min= 366, max= 884, avg=587.70, stdev=142.57, samples=20 00:24:44.855 lat (msec) : 4=0.02%, 10=0.51%, 20=2.26%, 50=11.11%, 100=31.99% 00:24:44.855 lat (msec) : 250=53.50%, 500=0.62% 00:24:44.855 cpu : usr=0.43%, sys=2.14%, ctx=1316, majf=0, minf=4097 00:24:44.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:44.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.855 issued rwts: total=5940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.855 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.855 job3: (groupid=0, jobs=1): err= 0: pid=3634748: Fri Jul 12 13:32:40 2024 00:24:44.855 read: IOPS=533, BW=133MiB/s (140MB/s)(1342MiB/10052msec) 00:24:44.855 slat (usec): min=14, max=119117, avg=1512.39, stdev=5807.36 00:24:44.855 clat (msec): min=12, max=371, avg=118.24, stdev=56.54 00:24:44.855 lat (msec): min=12, max=371, avg=119.75, stdev=57.38 00:24:44.855 clat percentiles (msec): 00:24:44.855 | 1.00th=[ 25], 5.00th=[ 43], 10.00th=[ 57], 20.00th=[ 75], 00:24:44.855 | 30.00th=[ 88], 40.00th=[ 97], 50.00th=[ 109], 60.00th=[ 124], 00:24:44.855 | 70.00th=[ 140], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 230], 00:24:44.855 | 99.00th=[ 309], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 359], 00:24:44.855 | 99.99th=[ 372] 00:24:44.855 bw ( KiB/s): min=54272, max=225792, per=7.46%, avg=135765.60, stdev=40143.64, samples=20 00:24:44.855 iops : min= 212, max= 882, avg=530.30, stdev=156.78, samples=20 00:24:44.856 lat (msec) : 20=0.48%, 50=6.82%, 100=35.33%, 250=53.38%, 500=3.99% 00:24:44.856 cpu : usr=0.32%, sys=2.00%, ctx=1253, majf=0, minf=4097 00:24:44.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:44.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.856 issued rwts: total=5367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.856 job4: (groupid=0, jobs=1): err= 0: pid=3634753: Fri Jul 12 13:32:40 2024 00:24:44.856 read: IOPS=610, BW=153MiB/s (160MB/s)(1536MiB/10059msec) 00:24:44.856 slat (usec): min=10, max=128486, avg=1284.88, stdev=5087.93 00:24:44.856 clat (msec): min=4, max=282, avg=103.40, stdev=49.84 00:24:44.856 lat (msec): min=4, max=282, avg=104.69, stdev=50.65 00:24:44.856 clat percentiles (msec): 00:24:44.856 | 1.00th=[ 11], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 60], 00:24:44.856 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 99], 60.00th=[ 110], 00:24:44.856 | 70.00th=[ 132], 80.00th=[ 150], 90.00th=[ 174], 95.00th=[ 192], 00:24:44.856 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 241], 99.95th=[ 253], 00:24:44.856 | 99.99th=[ 284] 00:24:44.856 bw ( KiB/s): min=79872, max=267776, per=8.55%, avg=155662.65, stdev=51562.26, samples=20 00:24:44.856 iops : min= 312, max= 1046, avg=608.05, stdev=201.42, samples=20 
00:24:44.856 lat (msec) : 10=0.91%, 20=1.32%, 50=13.04%, 100=37.22%, 250=47.43% 00:24:44.856 lat (msec) : 500=0.08% 00:24:44.856 cpu : usr=0.45%, sys=2.19%, ctx=1396, majf=0, minf=4097 00:24:44.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:44.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.856 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.856 job5: (groupid=0, jobs=1): err= 0: pid=3634777: Fri Jul 12 13:32:40 2024 00:24:44.856 read: IOPS=592, BW=148MiB/s (155MB/s)(1498MiB/10108msec) 00:24:44.856 slat (usec): min=10, max=153189, avg=1340.61, stdev=6327.31 00:24:44.856 clat (msec): min=2, max=339, avg=106.50, stdev=69.97 00:24:44.856 lat (msec): min=2, max=438, avg=107.84, stdev=71.13 00:24:44.856 clat percentiles (msec): 00:24:44.856 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 21], 20.00th=[ 32], 00:24:44.856 | 30.00th=[ 49], 40.00th=[ 88], 50.00th=[ 111], 60.00th=[ 128], 00:24:44.856 | 70.00th=[ 140], 80.00th=[ 161], 90.00th=[ 194], 95.00th=[ 236], 00:24:44.856 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 338], 00:24:44.856 | 99.99th=[ 338] 00:24:44.856 bw ( KiB/s): min=56832, max=383233, per=8.34%, avg=151769.65, stdev=81621.89, samples=20 00:24:44.856 iops : min= 222, max= 1497, avg=592.85, stdev=318.83, samples=20 00:24:44.856 lat (msec) : 4=0.32%, 10=2.49%, 20=7.06%, 50=21.21%, 100=13.18% 00:24:44.856 lat (msec) : 250=51.79%, 500=3.95% 00:24:44.856 cpu : usr=0.32%, sys=2.03%, ctx=1322, majf=0, minf=4097 00:24:44.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:44.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.856 issued rwts: total=5993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.856 job6: (groupid=0, jobs=1): err= 0: pid=3634792: Fri Jul 12 13:32:40 2024 00:24:44.856 read: IOPS=563, BW=141MiB/s (148MB/s)(1424MiB/10102msec) 00:24:44.856 slat (usec): min=12, max=186904, avg=1379.26, stdev=5419.19 00:24:44.856 clat (msec): min=8, max=281, avg=112.03, stdev=45.35 00:24:44.856 lat (msec): min=8, max=390, avg=113.41, stdev=46.09 00:24:44.856 clat percentiles (msec): 00:24:44.856 | 1.00th=[ 28], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 72], 00:24:44.856 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 120], 00:24:44.856 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 167], 95.00th=[ 192], 00:24:44.856 | 99.00th=[ 253], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 284], 00:24:44.856 | 99.99th=[ 284] 00:24:44.856 bw ( KiB/s): min=86016, max=209920, per=7.92%, avg=144161.20, stdev=43200.73, samples=20 00:24:44.856 iops : min= 336, max= 820, avg=563.10, stdev=168.73, samples=20 00:24:44.856 lat (msec) : 10=0.04%, 20=0.23%, 50=5.16%, 100=37.81%, 250=55.59% 00:24:44.856 lat (msec) : 500=1.18% 00:24:44.856 cpu : usr=0.35%, sys=2.06%, ctx=1308, majf=0, minf=4097 00:24:44.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:44.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.856 issued rwts: total=5695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.856 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:24:44.856 job7: (groupid=0, jobs=1): err= 0: pid=3634810: Fri Jul 12 13:32:40 2024 00:24:44.856 read: IOPS=942, BW=236MiB/s (247MB/s)(2370MiB/10055msec) 00:24:44.856 slat (usec): min=13, max=104924, avg=871.53, stdev=3123.54 00:24:44.856 clat (usec): min=1402, max=237258, avg=66954.56, stdev=39434.36 00:24:44.856 lat (usec): min=1459, max=250624, avg=67826.09, stdev=39900.29 00:24:44.856 clat percentiles (msec): 00:24:44.856 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 33], 00:24:44.856 | 30.00th=[ 36], 40.00th=[ 44], 50.00th=[ 55], 60.00th=[ 71], 00:24:44.856 | 70.00th=[ 86], 80.00th=[ 100], 90.00th=[ 124], 95.00th=[ 146], 00:24:44.856 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 230], 99.95th=[ 230], 00:24:44.856 | 99.99th=[ 239] 00:24:44.856 bw ( KiB/s): min=108544, max=499200, per=13.25%, avg=241034.20, stdev=116343.77, samples=20 00:24:44.856 iops : min= 424, max= 1950, avg=941.50, stdev=454.50, samples=20 00:24:44.856 lat (msec) : 2=0.04%, 4=0.06%, 10=1.29%, 20=1.19%, 50=43.62% 00:24:44.856 lat (msec) : 100=34.35%, 250=19.44% 00:24:44.856 cpu : usr=0.56%, sys=3.20%, ctx=1969, majf=0, minf=3721 00:24:44.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:44.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.856 issued rwts: total=9479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.856 job8: (groupid=0, jobs=1): err= 0: pid=3634868: Fri Jul 12 13:32:40 2024 00:24:44.856 read: IOPS=706, BW=177MiB/s (185MB/s)(1781MiB/10078msec) 00:24:44.856 slat (usec): min=9, max=144176, avg=1114.09, stdev=5083.90 00:24:44.856 clat (usec): min=1717, max=307253, avg=89323.30, stdev=52597.53 00:24:44.856 lat (usec): min=1747, max=322849, avg=90437.39, stdev=53372.85 00:24:44.856 clat percentiles (msec): 00:24:44.856 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 44], 00:24:44.856 | 30.00th=[ 54], 40.00th=[ 67], 50.00th=[ 81], 60.00th=[ 99], 00:24:44.856 | 70.00th=[ 118], 80.00th=[ 142], 90.00th=[ 165], 95.00th=[ 182], 00:24:44.856 | 99.00th=[ 213], 99.50th=[ 222], 99.90th=[ 226], 99.95th=[ 230], 00:24:44.856 | 99.99th=[ 309] 00:24:44.856 bw ( KiB/s): min=79872, max=340480, per=9.94%, avg=180787.20, stdev=71755.59, samples=20 00:24:44.856 iops : min= 312, max= 1330, avg=706.20, stdev=280.30, samples=20 00:24:44.856 lat (msec) : 2=0.17%, 4=0.31%, 10=3.47%, 20=5.39%, 50=16.70% 00:24:44.856 lat (msec) : 100=35.23%, 250=38.71%, 500=0.03% 00:24:44.856 cpu : usr=0.43%, sys=2.41%, ctx=1711, majf=0, minf=4097 00:24:44.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:44.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.856 issued rwts: total=7125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.856 job9: (groupid=0, jobs=1): err= 0: pid=3634891: Fri Jul 12 13:32:40 2024 00:24:44.856 read: IOPS=518, BW=130MiB/s (136MB/s)(1311MiB/10113msec) 00:24:44.856 slat (usec): min=14, max=161895, avg=1588.80, stdev=6583.49 00:24:44.856 clat (usec): min=1543, max=434977, avg=121698.84, stdev=68612.99 00:24:44.856 lat (usec): min=1563, max=435038, avg=123287.64, stdev=69850.98 00:24:44.856 clat percentiles (msec): 00:24:44.856 
| 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 63], 00:24:44.856 | 30.00th=[ 93], 40.00th=[ 112], 50.00th=[ 124], 60.00th=[ 136], 00:24:44.856 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 199], 95.00th=[ 241], 00:24:44.856 | 99.00th=[ 309], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 435], 00:24:44.856 | 99.99th=[ 435] 00:24:44.856 bw ( KiB/s): min=54272, max=271872, per=7.29%, avg=132608.00, stdev=51414.86, samples=20 00:24:44.856 iops : min= 212, max= 1062, avg=518.00, stdev=200.84, samples=20 00:24:44.856 lat (msec) : 2=0.17%, 4=1.68%, 10=1.96%, 20=5.65%, 50=8.64% 00:24:44.856 lat (msec) : 100=15.74%, 250=61.83%, 500=4.33% 00:24:44.856 cpu : usr=0.30%, sys=1.85%, ctx=1390, majf=0, minf=4097 00:24:44.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:44.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.856 issued rwts: total=5243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.856 job10: (groupid=0, jobs=1): err= 0: pid=3634900: Fri Jul 12 13:32:40 2024 00:24:44.856 read: IOPS=657, BW=164MiB/s (172MB/s)(1646MiB/10014msec) 00:24:44.856 slat (usec): min=14, max=74807, avg=1260.49, stdev=4463.75 00:24:44.856 clat (msec): min=5, max=377, avg=96.02, stdev=61.14 00:24:44.856 lat (msec): min=5, max=377, avg=97.28, stdev=62.00 00:24:44.856 clat percentiles (msec): 00:24:44.856 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 41], 00:24:44.856 | 30.00th=[ 52], 40.00th=[ 70], 50.00th=[ 84], 60.00th=[ 103], 00:24:44.856 | 70.00th=[ 127], 80.00th=[ 142], 90.00th=[ 171], 95.00th=[ 201], 00:24:44.856 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 351], 00:24:44.856 | 99.99th=[ 380] 00:24:44.856 bw ( KiB/s): min=54784, max=360448, per=8.64%, avg=157237.89, stdev=83222.69, samples=19 00:24:44.856 iops : min= 214, max= 1408, avg=614.21, stdev=325.09, samples=19 00:24:44.856 lat (msec) : 10=1.03%, 20=2.46%, 50=25.45%, 100=30.01%, 250=37.94% 00:24:44.856 lat (msec) : 500=3.11% 00:24:44.856 cpu : usr=0.38%, sys=2.36%, ctx=1535, majf=0, minf=4097 00:24:44.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:44.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.856 issued rwts: total=6582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.856 00:24:44.856 Run status group 0 (all jobs): 00:24:44.856 READ: bw=1777MiB/s (1863MB/s), 130MiB/s-236MiB/s (136MB/s-247MB/s), io=17.6GiB (18.8GB), run=10014-10114msec 00:24:44.856 00:24:44.856 Disk stats (read/write): 00:24:44.856 nvme0n1: ios=15205/0, merge=0/0, ticks=1226602/0, in_queue=1226602, util=96.95% 00:24:44.856 nvme10n1: ios=13022/0, merge=0/0, ticks=1231232/0, in_queue=1231232, util=97.17% 00:24:44.856 nvme1n1: ios=11709/0, merge=0/0, ticks=1235425/0, in_queue=1235425, util=97.45% 00:24:44.856 nvme2n1: ios=10483/0, merge=0/0, ticks=1232728/0, in_queue=1232728, util=97.60% 00:24:44.857 nvme3n1: ios=12052/0, merge=0/0, ticks=1230448/0, in_queue=1230448, util=97.68% 00:24:44.857 nvme4n1: ios=11766/0, merge=0/0, ticks=1229539/0, in_queue=1229539, util=98.06% 00:24:44.857 nvme5n1: ios=11158/0, merge=0/0, ticks=1231808/0, in_queue=1231808, util=98.25% 00:24:44.857 nvme6n1: ios=18676/0, merge=0/0, 
ticks=1234335/0, in_queue=1234335, util=98.36% 00:24:44.857 nvme7n1: ios=14005/0, merge=0/0, ticks=1227708/0, in_queue=1227708, util=98.82% 00:24:44.857 nvme8n1: ios=10285/0, merge=0/0, ticks=1228703/0, in_queue=1228703, util=99.04% 00:24:44.857 nvme9n1: ios=12738/0, merge=0/0, ticks=1234150/0, in_queue=1234150, util=99.20% 00:24:44.857 13:32:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:44.857 [global] 00:24:44.857 thread=1 00:24:44.857 invalidate=1 00:24:44.857 rw=randwrite 00:24:44.857 time_based=1 00:24:44.857 runtime=10 00:24:44.857 ioengine=libaio 00:24:44.857 direct=1 00:24:44.857 bs=262144 00:24:44.857 iodepth=64 00:24:44.857 norandommap=1 00:24:44.857 numjobs=1 00:24:44.857 00:24:44.857 [job0] 00:24:44.857 filename=/dev/nvme0n1 00:24:44.857 [job1] 00:24:44.857 filename=/dev/nvme10n1 00:24:44.857 [job2] 00:24:44.857 filename=/dev/nvme1n1 00:24:44.857 [job3] 00:24:44.857 filename=/dev/nvme2n1 00:24:44.857 [job4] 00:24:44.857 filename=/dev/nvme3n1 00:24:44.857 [job5] 00:24:44.857 filename=/dev/nvme4n1 00:24:44.857 [job6] 00:24:44.857 filename=/dev/nvme5n1 00:24:44.857 [job7] 00:24:44.857 filename=/dev/nvme6n1 00:24:44.857 [job8] 00:24:44.857 filename=/dev/nvme7n1 00:24:44.857 [job9] 00:24:44.857 filename=/dev/nvme8n1 00:24:44.857 [job10] 00:24:44.857 filename=/dev/nvme9n1 00:24:44.857 Could not set queue depth (nvme0n1) 00:24:44.857 Could not set queue depth (nvme10n1) 00:24:44.857 Could not set queue depth (nvme1n1) 00:24:44.857 Could not set queue depth (nvme2n1) 00:24:44.857 Could not set queue depth (nvme3n1) 00:24:44.857 Could not set queue depth (nvme4n1) 00:24:44.857 Could not set queue depth (nvme5n1) 00:24:44.857 Could not set queue depth (nvme6n1) 00:24:44.857 Could not set queue depth (nvme7n1) 00:24:44.857 Could not set queue depth (nvme8n1) 00:24:44.857 Could not set queue depth (nvme9n1) 00:24:44.857 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.857 fio-3.35 00:24:44.857 Starting 11 threads 00:24:54.816 00:24:54.816 job0: (groupid=0, jobs=1): err= 0: pid=3635918: Fri Jul 12 13:32:51 2024 
00:24:54.816 write: IOPS=520, BW=130MiB/s (137MB/s)(1318MiB/10121msec); 0 zone resets 00:24:54.816 slat (usec): min=15, max=64908, avg=967.83, stdev=3216.68 00:24:54.816 clat (usec): min=1798, max=2245.9k, avg=121823.71, stdev=105090.44 00:24:54.816 lat (usec): min=1868, max=2246.0k, avg=122791.54, stdev=105462.74 00:24:54.816 clat percentiles (msec): 00:24:54.816 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 66], 00:24:54.816 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 107], 60.00th=[ 121], 00:24:54.816 | 70.00th=[ 144], 80.00th=[ 171], 90.00th=[ 199], 95.00th=[ 249], 00:24:54.816 | 99.00th=[ 275], 99.50th=[ 384], 99.90th=[ 2089], 99.95th=[ 2106], 00:24:54.816 | 99.99th=[ 2232] 00:24:54.816 bw ( KiB/s): min=62976, max=210432, per=10.76%, avg=133350.40, stdev=40870.80, samples=20 00:24:54.816 iops : min= 246, max= 822, avg=520.90, stdev=159.65, samples=20 00:24:54.816 lat (msec) : 2=0.04%, 4=0.21%, 10=1.02%, 20=2.69%, 50=11.65% 00:24:54.816 lat (msec) : 100=29.65%, 250=49.94%, 500=4.55%, 750=0.02%, 1000=0.04% 00:24:54.816 lat (msec) : 2000=0.06%, >=2000=0.13% 00:24:54.816 cpu : usr=1.59%, sys=1.84%, ctx=3655, majf=0, minf=1 00:24:54.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:54.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.816 issued rwts: total=0,5272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.816 job1: (groupid=0, jobs=1): err= 0: pid=3635919: Fri Jul 12 13:32:51 2024 00:24:54.816 write: IOPS=727, BW=182MiB/s (191MB/s)(1833MiB/10080msec); 0 zone resets 00:24:54.816 slat (usec): min=16, max=64459, avg=869.21, stdev=2456.12 00:24:54.816 clat (usec): min=1277, max=437049, avg=87046.89, stdev=53051.62 00:24:54.816 lat (usec): min=1324, max=437085, avg=87916.09, stdev=53374.41 00:24:54.816 clat percentiles (msec): 00:24:54.816 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 42], 20.00th=[ 50], 00:24:54.816 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 67], 60.00th=[ 88], 00:24:54.816 | 70.00th=[ 111], 80.00th=[ 127], 90.00th=[ 165], 95.00th=[ 184], 00:24:54.816 | 99.00th=[ 251], 99.50th=[ 296], 99.90th=[ 388], 99.95th=[ 414], 00:24:54.816 | 99.99th=[ 439] 00:24:54.816 bw ( KiB/s): min=90112, max=325120, per=15.01%, avg=186110.40, stdev=70346.58, samples=20 00:24:54.816 iops : min= 352, max= 1270, avg=726.95, stdev=274.76, samples=20 00:24:54.816 lat (msec) : 2=0.10%, 4=0.20%, 10=1.20%, 20=2.39%, 50=18.90% 00:24:54.816 lat (msec) : 100=41.90%, 250=34.29%, 500=1.02% 00:24:54.816 cpu : usr=2.30%, sys=2.31%, ctx=3664, majf=0, minf=1 00:24:54.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:54.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.816 issued rwts: total=0,7332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.816 job2: (groupid=0, jobs=1): err= 0: pid=3635920: Fri Jul 12 13:32:51 2024 00:24:54.816 write: IOPS=507, BW=127MiB/s (133MB/s)(1290MiB/10175msec); 0 zone resets 00:24:54.816 slat (usec): min=20, max=114643, avg=1522.43, stdev=4274.16 00:24:54.816 clat (msec): min=2, max=533, avg=124.60, stdev=80.97 00:24:54.816 lat (msec): min=2, max=533, avg=126.12, stdev=81.60 00:24:54.816 clat percentiles (msec): 00:24:54.816 | 1.00th=[ 7], 5.00th=[ 
30], 10.00th=[ 43], 20.00th=[ 47], 00:24:54.816 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 111], 60.00th=[ 118], 00:24:54.816 | 70.00th=[ 146], 80.00th=[ 174], 90.00th=[ 236], 95.00th=[ 288], 00:24:54.816 | 99.00th=[ 384], 99.50th=[ 426], 99.90th=[ 527], 99.95th=[ 535], 00:24:54.816 | 99.99th=[ 535] 00:24:54.816 bw ( KiB/s): min=50688, max=351744, per=10.53%, avg=130483.20, stdev=59496.97, samples=20 00:24:54.816 iops : min= 198, max= 1374, avg=509.70, stdev=232.41, samples=20 00:24:54.816 lat (msec) : 4=0.27%, 10=1.45%, 20=1.30%, 50=17.73%, 100=21.10% 00:24:54.816 lat (msec) : 250=50.00%, 500=7.97%, 750=0.17% 00:24:54.816 cpu : usr=1.47%, sys=1.81%, ctx=2158, majf=0, minf=1 00:24:54.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:54.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.816 issued rwts: total=0,5160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.816 job3: (groupid=0, jobs=1): err= 0: pid=3635932: Fri Jul 12 13:32:51 2024 00:24:54.816 write: IOPS=370, BW=92.6MiB/s (97.1MB/s)(945MiB/10196msec); 0 zone resets 00:24:54.816 slat (usec): min=22, max=1269.0k, avg=2282.34, stdev=22986.61 00:24:54.816 clat (usec): min=1716, max=2168.1k, avg=170297.64, stdev=274918.00 00:24:54.816 lat (msec): min=2, max=2168, avg=172.58, stdev=277.68 00:24:54.816 clat percentiles (msec): 00:24:54.816 | 1.00th=[ 5], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 56], 00:24:54.816 | 30.00th=[ 65], 40.00th=[ 84], 50.00th=[ 107], 60.00th=[ 117], 00:24:54.816 | 70.00th=[ 161], 80.00th=[ 234], 90.00th=[ 305], 95.00th=[ 401], 00:24:54.816 | 99.00th=[ 2140], 99.50th=[ 2165], 99.90th=[ 2165], 99.95th=[ 2165], 00:24:54.816 | 99.99th=[ 2165] 00:24:54.816 bw ( KiB/s): min= 2048, max=267776, per=8.52%, avg=105673.83, stdev=71738.06, samples=18 00:24:54.816 iops : min= 8, max= 1046, avg=412.78, stdev=280.24, samples=18 00:24:54.817 lat (msec) : 2=0.05%, 4=0.26%, 10=2.73%, 20=0.34%, 50=10.22% 00:24:54.817 lat (msec) : 100=33.22%, 250=36.08%, 500=14.58%, 750=0.74%, 1000=0.11% 00:24:54.817 lat (msec) : 2000=0.11%, >=2000=1.56% 00:24:54.817 cpu : usr=1.24%, sys=1.05%, ctx=1671, majf=0, minf=1 00:24:54.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:24:54.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.817 issued rwts: total=0,3778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.817 job4: (groupid=0, jobs=1): err= 0: pid=3635933: Fri Jul 12 13:32:51 2024 00:24:54.817 write: IOPS=318, BW=79.5MiB/s (83.4MB/s)(807MiB/10144msec); 0 zone resets 00:24:54.817 slat (usec): min=18, max=1002.5k, avg=2110.40, stdev=23497.47 00:24:54.817 clat (usec): min=1136, max=2159.9k, avg=198955.66, stdev=291928.45 00:24:54.817 lat (usec): min=1171, max=2160.0k, avg=201066.07, stdev=294538.36 00:24:54.817 clat percentiles (msec): 00:24:54.817 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 29], 20.00th=[ 64], 00:24:54.817 | 30.00th=[ 87], 40.00th=[ 115], 50.00th=[ 144], 60.00th=[ 167], 00:24:54.817 | 70.00th=[ 203], 80.00th=[ 251], 90.00th=[ 330], 95.00th=[ 409], 00:24:54.817 | 99.00th=[ 2123], 99.50th=[ 2140], 99.90th=[ 2165], 99.95th=[ 2165], 00:24:54.817 | 99.99th=[ 2165] 00:24:54.817 bw ( KiB/s): min= 2048, max=163840, 
per=7.26%, avg=89976.00, stdev=44597.61, samples=18 00:24:54.817 iops : min= 8, max= 640, avg=351.44, stdev=174.23, samples=18 00:24:54.817 lat (msec) : 2=0.19%, 4=0.46%, 10=1.92%, 20=4.22%, 50=8.37% 00:24:54.817 lat (msec) : 100=19.71%, 250=45.16%, 500=15.87%, 750=2.01%, 2000=0.25% 00:24:54.817 lat (msec) : >=2000=1.83% 00:24:54.817 cpu : usr=0.91%, sys=1.09%, ctx=1963, majf=0, minf=1 00:24:54.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:24:54.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.817 issued rwts: total=0,3226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.817 job5: (groupid=0, jobs=1): err= 0: pid=3635934: Fri Jul 12 13:32:51 2024 00:24:54.817 write: IOPS=438, BW=110MiB/s (115MB/s)(1116MiB/10178msec); 0 zone resets 00:24:54.817 slat (usec): min=23, max=85466, avg=1625.83, stdev=4309.43 00:24:54.817 clat (msec): min=3, max=399, avg=144.12, stdev=62.58 00:24:54.817 lat (msec): min=5, max=402, avg=145.75, stdev=63.22 00:24:54.817 clat percentiles (msec): 00:24:54.817 | 1.00th=[ 20], 5.00th=[ 53], 10.00th=[ 82], 20.00th=[ 107], 00:24:54.817 | 30.00th=[ 112], 40.00th=[ 117], 50.00th=[ 136], 60.00th=[ 148], 00:24:54.817 | 70.00th=[ 165], 80.00th=[ 180], 90.00th=[ 230], 95.00th=[ 271], 00:24:54.817 | 99.00th=[ 347], 99.50th=[ 376], 99.90th=[ 393], 99.95th=[ 393], 00:24:54.817 | 99.99th=[ 401] 00:24:54.817 bw ( KiB/s): min=48640, max=161792, per=9.09%, avg=112700.85, stdev=29092.97, samples=20 00:24:54.817 iops : min= 190, max= 632, avg=440.20, stdev=113.67, samples=20 00:24:54.817 lat (msec) : 4=0.02%, 10=0.22%, 20=0.85%, 50=3.67%, 100=11.20% 00:24:54.817 lat (msec) : 250=76.17%, 500=7.86% 00:24:54.817 cpu : usr=1.25%, sys=1.60%, ctx=2255, majf=0, minf=1 00:24:54.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:54.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.817 issued rwts: total=0,4465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.817 job6: (groupid=0, jobs=1): err= 0: pid=3635935: Fri Jul 12 13:32:51 2024 00:24:54.817 write: IOPS=280, BW=70.1MiB/s (73.5MB/s)(714MiB/10192msec); 0 zone resets 00:24:54.817 slat (usec): min=24, max=1057.4k, avg=2957.98, stdev=25144.79 00:24:54.817 clat (msec): min=7, max=2159, avg=225.22, stdev=303.98 00:24:54.817 lat (msec): min=7, max=2159, avg=228.18, stdev=307.04 00:24:54.817 clat percentiles (msec): 00:24:54.817 | 1.00th=[ 27], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 68], 00:24:54.817 | 30.00th=[ 102], 40.00th=[ 138], 50.00th=[ 171], 60.00th=[ 209], 00:24:54.817 | 70.00th=[ 245], 80.00th=[ 275], 90.00th=[ 342], 95.00th=[ 460], 00:24:54.817 | 99.00th=[ 2140], 99.50th=[ 2165], 99.90th=[ 2165], 99.95th=[ 2165], 00:24:54.817 | 99.99th=[ 2165] 00:24:54.817 bw ( KiB/s): min= 2048, max=242688, per=6.41%, avg=79419.61, stdev=56610.20, samples=18 00:24:54.817 iops : min= 8, max= 948, avg=310.22, stdev=221.14, samples=18 00:24:54.817 lat (msec) : 10=0.07%, 20=0.42%, 50=3.99%, 100=25.18%, 250=42.75% 00:24:54.817 lat (msec) : 500=24.30%, 750=0.95%, 2000=0.28%, >=2000=2.07% 00:24:54.817 cpu : usr=0.85%, sys=1.05%, ctx=1347, majf=0, minf=1 00:24:54.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, 
>=64=97.8% 00:24:54.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.817 issued rwts: total=0,2856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.817 job7: (groupid=0, jobs=1): err= 0: pid=3635936: Fri Jul 12 13:32:51 2024 00:24:54.817 write: IOPS=285, BW=71.3MiB/s (74.8MB/s)(723MiB/10142msec); 0 zone resets 00:24:54.817 slat (usec): min=21, max=1021.4k, avg=2616.06, stdev=25067.16 00:24:54.817 clat (usec): min=1938, max=2216.1k, avg=221679.73, stdev=314401.91 00:24:54.817 lat (msec): min=2, max=2216, avg=224.30, stdev=317.73 00:24:54.817 clat percentiles (msec): 00:24:54.817 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 39], 20.00th=[ 68], 00:24:54.817 | 30.00th=[ 87], 40.00th=[ 93], 50.00th=[ 150], 60.00th=[ 186], 00:24:54.817 | 70.00th=[ 253], 80.00th=[ 305], 90.00th=[ 376], 95.00th=[ 514], 00:24:54.817 | 99.00th=[ 2165], 99.50th=[ 2198], 99.90th=[ 2232], 99.95th=[ 2232], 00:24:54.817 | 99.99th=[ 2232] 00:24:54.817 bw ( KiB/s): min= 2048, max=211968, per=6.49%, avg=80472.28, stdev=58895.15, samples=18 00:24:54.817 iops : min= 8, max= 828, avg=314.33, stdev=230.07, samples=18 00:24:54.817 lat (msec) : 2=0.03%, 4=0.24%, 10=1.80%, 20=2.90%, 50=7.64% 00:24:54.817 lat (msec) : 100=30.71%, 250=26.21%, 500=24.90%, 750=3.25%, 2000=0.28% 00:24:54.817 lat (msec) : >=2000=2.04% 00:24:54.817 cpu : usr=0.90%, sys=0.99%, ctx=1876, majf=0, minf=1 00:24:54.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:24:54.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.817 issued rwts: total=0,2892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.817 job8: (groupid=0, jobs=1): err= 0: pid=3635938: Fri Jul 12 13:32:51 2024 00:24:54.817 write: IOPS=378, BW=94.5MiB/s (99.1MB/s)(954MiB/10089msec); 0 zone resets 00:24:54.817 slat (usec): min=23, max=1541.1k, avg=1714.54, stdev=25349.21 00:24:54.817 clat (usec): min=1505, max=1933.0k, avg=167492.34, stdev=235591.24 00:24:54.817 lat (usec): min=1603, max=1933.0k, avg=169206.87, stdev=237534.38 00:24:54.817 clat percentiles (msec): 00:24:54.817 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 71], 00:24:54.817 | 30.00th=[ 86], 40.00th=[ 109], 50.00th=[ 124], 60.00th=[ 138], 00:24:54.817 | 70.00th=[ 159], 80.00th=[ 197], 90.00th=[ 296], 95.00th=[ 422], 00:24:54.817 | 99.00th=[ 1770], 99.50th=[ 1854], 99.90th=[ 1921], 99.95th=[ 1938], 00:24:54.817 | 99.99th=[ 1938] 00:24:54.817 bw ( KiB/s): min=22528, max=193536, per=8.61%, avg=106695.11, stdev=42200.56, samples=18 00:24:54.817 iops : min= 88, max= 756, avg=416.78, stdev=164.85, samples=18 00:24:54.817 lat (msec) : 2=0.10%, 4=0.34%, 10=2.25%, 20=4.56%, 50=7.97% 00:24:54.817 lat (msec) : 100=20.08%, 250=52.88%, 500=8.91%, 750=1.23%, 2000=1.65% 00:24:54.818 cpu : usr=1.21%, sys=1.33%, ctx=2362, majf=0, minf=1 00:24:54.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:24:54.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.818 issued rwts: total=0,3814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.818 job9: 
(groupid=0, jobs=1): err= 0: pid=3635944: Fri Jul 12 13:32:51 2024 00:24:54.818 write: IOPS=297, BW=74.5MiB/s (78.1MB/s)(759MiB/10185msec); 0 zone resets 00:24:54.818 slat (usec): min=25, max=1900.9k, avg=2841.55, stdev=35541.46 00:24:54.818 clat (usec): min=1885, max=2079.0k, avg=211718.12, stdev=278818.03 00:24:54.818 lat (usec): min=1911, max=2118.1k, avg=214559.66, stdev=281546.20 00:24:54.818 clat percentiles (msec): 00:24:54.818 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 59], 20.00th=[ 87], 00:24:54.818 | 30.00th=[ 93], 40.00th=[ 114], 50.00th=[ 150], 60.00th=[ 186], 00:24:54.818 | 70.00th=[ 226], 80.00th=[ 271], 90.00th=[ 326], 95.00th=[ 485], 00:24:54.818 | 99.00th=[ 1905], 99.50th=[ 2056], 99.90th=[ 2072], 99.95th=[ 2072], 00:24:54.818 | 99.99th=[ 2072] 00:24:54.818 bw ( KiB/s): min=32768, max=183808, per=7.22%, avg=89517.24, stdev=46308.40, samples=17 00:24:54.818 iops : min= 128, max= 718, avg=349.65, stdev=180.91, samples=17 00:24:54.818 lat (msec) : 2=0.07%, 4=0.26%, 10=0.82%, 20=1.32%, 50=5.77% 00:24:54.818 lat (msec) : 100=25.86%, 250=41.02%, 500=20.20%, 750=2.60%, 2000=1.38% 00:24:54.818 lat (msec) : >=2000=0.69% 00:24:54.818 cpu : usr=0.94%, sys=1.10%, ctx=1360, majf=0, minf=1 00:24:54.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:24:54.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.818 issued rwts: total=0,3035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.818 job10: (groupid=0, jobs=1): err= 0: pid=3635945: Fri Jul 12 13:32:51 2024 00:24:54.818 write: IOPS=748, BW=187MiB/s (196MB/s)(1886MiB/10076msec); 0 zone resets 00:24:54.818 slat (usec): min=18, max=38581, avg=1236.18, stdev=2707.70 00:24:54.818 clat (usec): min=1900, max=339175, avg=84226.23, stdev=47146.73 00:24:54.818 lat (usec): min=1942, max=339227, avg=85462.41, stdev=47785.05 00:24:54.818 clat percentiles (msec): 00:24:54.818 | 1.00th=[ 13], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 51], 00:24:54.818 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 88], 00:24:54.818 | 70.00th=[ 103], 80.00th=[ 120], 90.00th=[ 136], 95.00th=[ 157], 00:24:54.818 | 99.00th=[ 266], 99.50th=[ 284], 99.90th=[ 317], 99.95th=[ 326], 00:24:54.818 | 99.99th=[ 338] 00:24:54.818 bw ( KiB/s): min=63488, max=320000, per=15.45%, avg=191462.40, stdev=76587.27, samples=20 00:24:54.818 iops : min= 248, max= 1250, avg=747.90, stdev=299.17, samples=20 00:24:54.818 lat (msec) : 2=0.03%, 4=0.08%, 10=0.58%, 20=1.15%, 50=18.07% 00:24:54.818 lat (msec) : 100=48.12%, 250=30.52%, 500=1.45% 00:24:54.818 cpu : usr=2.30%, sys=2.43%, ctx=2513, majf=0, minf=1 00:24:54.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:54.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.818 issued rwts: total=0,7542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.818 00:24:54.818 Run status group 0 (all jobs): 00:24:54.818 WRITE: bw=1211MiB/s (1269MB/s), 70.1MiB/s-187MiB/s (73.5MB/s-196MB/s), io=12.1GiB (12.9GB), run=10076-10196msec 00:24:54.818 00:24:54.818 Disk stats (read/write): 00:24:54.818 nvme0n1: ios=48/10339, merge=0/0, ticks=1566/1217885, in_queue=1219451, util=99.50% 00:24:54.818 nvme10n1: ios=51/14443, merge=0/0, 
ticks=1330/1218295, in_queue=1219625, util=99.74% 00:24:54.818 nvme1n1: ios=49/10161, merge=0/0, ticks=38/1208045, in_queue=1208083, util=97.73% 00:24:54.818 nvme2n1: ios=50/7524, merge=0/0, ticks=2438/1237661, in_queue=1240099, util=100.00% 00:24:54.818 nvme3n1: ios=45/6290, merge=0/0, ticks=5950/1203425, in_queue=1209375, util=100.00% 00:24:54.818 nvme4n1: ios=45/8927, merge=0/0, ticks=317/1246609, in_queue=1246926, util=100.00% 00:24:54.818 nvme5n1: ios=45/5689, merge=0/0, ticks=2050/1240016, in_queue=1242066, util=100.00% 00:24:54.818 nvme6n1: ios=0/5599, merge=0/0, ticks=0/1214066, in_queue=1214066, util=98.38% 00:24:54.818 nvme7n1: ios=0/7268, merge=0/0, ticks=0/1224681, in_queue=1224681, util=98.77% 00:24:54.818 nvme8n1: ios=47/6060, merge=0/0, ticks=3543/969164, in_queue=972707, util=100.00% 00:24:54.818 nvme9n1: ios=0/14850, merge=0/0, ticks=0/1210391, in_queue=1210391, util=99.00% 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:54.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.818 13:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:54.818 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.818 13:32:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.818 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:55.076 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:55.076 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:55.076 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:55.076 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.076 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:55.076 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:55.076 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:55.334 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:55.334 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:55.334 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.334 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.334 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.334 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.334 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:55.592 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.592 13:32:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.592 13:32:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:55.850 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:55.850 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.850 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:56.107 
NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.107 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:56.365 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.365 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:56.622 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.622 13:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:56.622 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:56.622 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:56.622 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.879 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:56.879 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.880 rmmod nvme_tcp 00:24:56.880 rmmod nvme_fabrics 00:24:56.880 rmmod nvme_keyring 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3630488 ']' 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3630488 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 3630488 ']' 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 3630488 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3630488 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3630488' 00:24:56.880 killing process with pid 3630488 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 3630488 00:24:56.880 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 3630488 00:24:57.445 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:57.445 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:57.445 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:57.445 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:57.445 13:32:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:57.445 13:32:54 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.445 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.445 13:32:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.976 13:32:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:59.976 00:24:59.976 real 1m1.011s 00:24:59.976 user 3m21.816s 00:24:59.976 sys 0m24.538s 00:24:59.976 13:32:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:59.976 13:32:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.976 ************************************ 00:24:59.976 END TEST nvmf_multiconnection 00:24:59.976 ************************************ 00:24:59.976 13:32:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:59.976 13:32:56 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:59.976 13:32:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:59.976 13:32:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.976 13:32:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.976 ************************************ 00:24:59.976 START TEST nvmf_initiator_timeout 00:24:59.976 ************************************ 00:24:59.976 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:59.976 * Looking for test storage... 00:24:59.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:59.976 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.976 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.977 13:32:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.977 13:32:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:59.977 13:32:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:59.977 13:32:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:59.977 13:32:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:01.875 13:32:59 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:01.875 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:01.876 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:01.876 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.876 13:32:59 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:01.876 Found net devices under 0000:09:00.0: cvl_0_0 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:01.876 Found net devices under 0000:09:00.1: cvl_0_1 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 
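The trace above shows nvmf/common.sh enumerating the two Intel E810 ports (0x8086:0x159b, bound to the ice driver) and resolving each PCI function to its kernel net device through sysfs before building TCP_INTERFACE_LIST. A minimal sketch of that mapping step, assuming the same two PCI addresses seen in this run; it mirrors the pci_net_devs glob used by gather_supported_nvmf_pci_devs and is not part of the test scripts themselves:

# Hedged sketch: map each NIC PCI function to its net device via
# /sys/bus/pci/devices/<bdf>/net/, the same way the trace above does it.
for pci in 0000:09:00.0 0000:09:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue          # skip functions with no bound net device
        echo "Found net device under $pci: ${netdir##*/}"
    done
done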
00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:01.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:25:01.876 00:25:01.876 --- 10.0.0.2 ping statistics --- 00:25:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.876 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:25:01.876 00:25:01.876 --- 10.0.0.1 ping statistics --- 00:25:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.876 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3639137 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3639137 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 3639137 ']' 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.876 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.876 [2024-07-12 13:32:59.325841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:25:01.876 [2024-07-12 13:32:59.325921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.134 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.134 [2024-07-12 13:32:59.362895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
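nvmf_tcp_init splits the two ports across a network namespace so target and initiator can talk to each other over real hardware on a single host: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, both directions are verified with a single ping, and nvmf_tgt is then launched inside the namespace. A condensed recap of the commands from the trace above (a sketch, not a replacement for nvmf/common.sh):

# Target side lives in its own netns; initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in, then sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target application runs inside the namespace, as in the trace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF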
00:25:02.134 [2024-07-12 13:32:59.387860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.134 [2024-07-12 13:32:59.470918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.134 [2024-07-12 13:32:59.470968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.134 [2024-07-12 13:32:59.470995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.134 [2024-07-12 13:32:59.471007] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.134 [2024-07-12 13:32:59.471016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.134 [2024-07-12 13:32:59.471102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.134 [2024-07-12 13:32:59.471207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.134 [2024-07-12 13:32:59.471269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.134 [2024-07-12 13:32:59.471271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.134 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.134 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:02.134 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.134 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:02.134 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:02.392 Malloc0 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:02.392 Delay0 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:02.392 [2024-07-12 13:32:59.645939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:02.392 [2024-07-12 13:32:59.674183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.392 13:32:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:02.957 13:33:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:02.957 13:33:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:02.957 13:33:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.957 13:33:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:02.957 13:33:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3639666 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:04.854 13:33:02 nvmf_tcp.nvmf_initiator_timeout 
-- target/initiator_timeout.sh@37 -- # sleep 3 00:25:04.854 [global] 00:25:04.854 thread=1 00:25:04.854 invalidate=1 00:25:04.854 rw=write 00:25:04.854 time_based=1 00:25:04.854 runtime=60 00:25:04.854 ioengine=libaio 00:25:04.854 direct=1 00:25:04.854 bs=4096 00:25:04.854 iodepth=1 00:25:04.854 norandommap=0 00:25:04.854 numjobs=1 00:25:04.854 00:25:04.854 verify_dump=1 00:25:04.854 verify_backlog=512 00:25:04.854 verify_state_save=0 00:25:04.854 do_verify=1 00:25:04.854 verify=crc32c-intel 00:25:04.854 [job0] 00:25:04.854 filename=/dev/nvme0n1 00:25:04.854 Could not set queue depth (nvme0n1) 00:25:05.111 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:05.111 fio-3.35 00:25:05.111 Starting 1 thread 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.417 true 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.417 true 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.417 true 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.417 true 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.417 13:33:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.935 true 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.935 13:33:08 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.935 true 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.935 true 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.935 true 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:10.935 13:33:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3639666 00:26:07.128 00:26:07.128 job0: (groupid=0, jobs=1): err= 0: pid=3639739: Fri Jul 12 13:34:02 2024 00:26:07.128 read: IOPS=7, BW=30.7KiB/s (31.5kB/s)(1844KiB/60026msec) 00:26:07.128 slat (usec): min=8, max=899, avg=23.37, stdev=41.97 00:26:07.128 clat (usec): min=482, max=40987k, avg=129845.76, stdev=1907038.17 00:26:07.128 lat (usec): min=537, max=40987k, avg=129869.14, stdev=1907037.74 00:26:07.128 clat percentiles (msec): 00:26:07.128 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:07.128 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 42], 00:26:07.128 | 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 43], 95.00th=[ 43], 00:26:07.128 | 99.00th=[ 43], 99.50th=[ 43], 99.90th=[17113], 99.95th=[17113], 00:26:07.129 | 99.99th=[17113] 00:26:07.129 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60026msec); 0 zone resets 00:26:07.129 slat (nsec): min=6942, max=70703, avg=16552.95, stdev=10544.52 00:26:07.129 clat (usec): min=209, max=454, avg=277.14, stdev=54.07 00:26:07.129 lat (usec): min=216, max=474, avg=293.69, stdev=59.99 00:26:07.129 clat percentiles (usec): 00:26:07.129 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:26:07.129 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 273], 00:26:07.129 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 359], 95.00th=[ 412], 00:26:07.129 | 99.00th=[ 433], 99.50th=[ 433], 99.90th=[ 453], 99.95th=[ 453], 00:26:07.129 | 99.99th=[ 453] 00:26:07.129 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:26:07.129 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:07.129 lat (usec) : 250=20.86%, 500=31.86% 00:26:07.129 lat (msec) : 50=47.17%, >=2000=0.10% 00:26:07.129 cpu : usr=0.02%, sys=0.04%, ctx=975, majf=0, minf=2 00:26:07.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.129 issued rwts: total=461,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.129 latency : target=0, window=0, percentile=100.00%, depth=1 
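For reference, this is what the initiator_timeout test did around that fio run: shortly after the job starts it raises Delay0's latencies to 31,000,000 µs (31 s) so in-flight I/O outlives the initiator's I/O timeout, then restores them to 30 µs so the remainder of the 60 s run can complete. A hedged sketch of the equivalent calls with scripts/rpc.py against the target's RPC socket; the trace itself drives them through the rpc_cmd test helper, and the 310000000 value for p99_write is copied from the trace as-is:

# Hedged sketch: raise Delay0 latencies above the initiator timeout, then restore them.
# Latency arguments are in microseconds; 31000000 us = 31 s.
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3    # let fio run into the delayed I/O path
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  30
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30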
00:26:07.129 00:26:07.129 Run status group 0 (all jobs): 00:26:07.129 READ: bw=30.7KiB/s (31.5kB/s), 30.7KiB/s-30.7KiB/s (31.5kB/s-31.5kB/s), io=1844KiB (1888kB), run=60026-60026msec 00:26:07.129 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60026-60026msec 00:26:07.129 00:26:07.129 Disk stats (read/write): 00:26:07.129 nvme0n1: ios=556/512, merge=0/0, ticks=18825/132, in_queue=18957, util=99.87% 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:07.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:07.129 nvmf hotplug test: fio successful as expected 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:07.129 rmmod nvme_tcp 00:26:07.129 rmmod nvme_fabrics 00:26:07.129 rmmod nvme_keyring 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- 
# return 0 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3639137 ']' 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3639137 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 3639137 ']' 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 3639137 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3639137 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3639137' 00:26:07.129 killing process with pid 3639137 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 3639137 00:26:07.129 13:34:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 3639137 00:26:07.129 13:34:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:07.129 13:34:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:07.129 13:34:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:07.129 13:34:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.129 13:34:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:07.129 13:34:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.129 13:34:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.129 13:34:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.698 13:34:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:07.698 00:26:07.698 real 1m8.236s 00:26:07.698 user 4m10.652s 00:26:07.698 sys 0m6.461s 00:26:07.698 13:34:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:07.698 13:34:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.698 ************************************ 00:26:07.698 END TEST nvmf_initiator_timeout 00:26:07.698 ************************************ 00:26:07.957 13:34:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:07.957 13:34:05 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:07.957 13:34:05 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:07.957 13:34:05 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:07.957 13:34:05 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:07.957 13:34:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:09.863 13:34:07 nvmf_tcp 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:09.863 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.863 13:34:07 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:09.864 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:09.864 Found net devices under 0000:09:00.0: cvl_0_0 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:09.864 Found net devices under 0000:09:00.1: cvl_0_1 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:09.864 13:34:07 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:09.864 13:34:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:09.864 13:34:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.864 13:34:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:09.864 ************************************ 00:26:09.864 START TEST nvmf_perf_adq 00:26:09.864 ************************************ 00:26:09.864 13:34:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:10.122 * Looking for test storage... 
00:26:10.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:10.122 13:34:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:12.652 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.652 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:12.652 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:12.652 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:12.652 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:12.652 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:12.652 13:34:09 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:12.653 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:12.653 Found 0000:09:00.1 (0x8086 - 0x159b) 
00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:12.653 Found net devices under 0000:09:00.0: cvl_0_0 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:12.653 Found net devices under 0000:09:00.1: cvl_0_1 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:12.653 13:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:12.911 13:34:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:14.850 13:34:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:20.190 13:34:17 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:20.190 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:20.190 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:20.190 Found net devices under 0000:09:00.0: cvl_0_0 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:20.190 Found net devices under 0000:09:00.1: cvl_0_1 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.190 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.191 13:34:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:20.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:26:20.191 00:26:20.191 --- 10.0.0.2 ping statistics --- 00:26:20.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.191 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:20.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:26:20.191 00:26:20.191 --- 10.0.0.1 ping statistics --- 00:26:20.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.191 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3651823 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3651823 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3651823 ']' 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.191 [2024-07-12 13:34:17.325108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
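The nvmf_tcp_init sequence traced above splits the two E810 ports into a point-to-point target/initiator pair: cvl_0_0 is moved into a fresh network namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. A condensed recap of those commands, run as root, with the interface and namespace names taken from this run:

  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk     # names from this run
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                    # target port is now only visible inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator address, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # accept TCP/4420 on the initiator-side interface, as in the trace
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # same bidirectional sanity checks as above

Everything the target side does afterwards, nvmf_tgt itself included, is therefore wrapped in "ip netns exec cvl_0_0_ns_spdk" in the trace.
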
00:26:20.191 [2024-07-12 13:34:17.325196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.191 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.191 [2024-07-12 13:34:17.362707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:20.191 [2024-07-12 13:34:17.390728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.191 [2024-07-12 13:34:17.478217] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.191 [2024-07-12 13:34:17.478262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.191 [2024-07-12 13:34:17.478289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.191 [2024-07-12 13:34:17.478300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.191 [2024-07-12 13:34:17.478310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.191 [2024-07-12 13:34:17.478392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.191 [2024-07-12 13:34:17.478697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.191 [2024-07-12 13:34:17.478759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.191 [2024-07-12 13:34:17.478763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 
-- # rpc_cmd framework_start_init 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.191 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.449 [2024-07-12 13:34:17.697928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.449 Malloc1 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:20.449 [2024-07-12 13:34:17.748962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3651903 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:20.449 13:34:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:20.449 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:22.345 
13:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:22.345 "tick_rate": 2700000000, 00:26:22.345 "poll_groups": [ 00:26:22.345 { 00:26:22.345 "name": "nvmf_tgt_poll_group_000", 00:26:22.345 "admin_qpairs": 1, 00:26:22.345 "io_qpairs": 1, 00:26:22.345 "current_admin_qpairs": 1, 00:26:22.345 "current_io_qpairs": 1, 00:26:22.345 "pending_bdev_io": 0, 00:26:22.345 "completed_nvme_io": 20643, 00:26:22.345 "transports": [ 00:26:22.345 { 00:26:22.345 "trtype": "TCP" 00:26:22.345 } 00:26:22.345 ] 00:26:22.345 }, 00:26:22.345 { 00:26:22.345 "name": "nvmf_tgt_poll_group_001", 00:26:22.345 "admin_qpairs": 0, 00:26:22.345 "io_qpairs": 1, 00:26:22.345 "current_admin_qpairs": 0, 00:26:22.345 "current_io_qpairs": 1, 00:26:22.345 "pending_bdev_io": 0, 00:26:22.345 "completed_nvme_io": 17492, 00:26:22.345 "transports": [ 00:26:22.345 { 00:26:22.345 "trtype": "TCP" 00:26:22.345 } 00:26:22.345 ] 00:26:22.345 }, 00:26:22.345 { 00:26:22.345 "name": "nvmf_tgt_poll_group_002", 00:26:22.345 "admin_qpairs": 0, 00:26:22.345 "io_qpairs": 1, 00:26:22.345 "current_admin_qpairs": 0, 00:26:22.345 "current_io_qpairs": 1, 00:26:22.345 "pending_bdev_io": 0, 00:26:22.345 "completed_nvme_io": 18986, 00:26:22.345 "transports": [ 00:26:22.345 { 00:26:22.345 "trtype": "TCP" 00:26:22.345 } 00:26:22.345 ] 00:26:22.345 }, 00:26:22.345 { 00:26:22.345 "name": "nvmf_tgt_poll_group_003", 00:26:22.345 "admin_qpairs": 0, 00:26:22.345 "io_qpairs": 1, 00:26:22.345 "current_admin_qpairs": 0, 00:26:22.345 "current_io_qpairs": 1, 00:26:22.345 "pending_bdev_io": 0, 00:26:22.345 "completed_nvme_io": 21509, 00:26:22.345 "transports": [ 00:26:22.345 { 00:26:22.345 "trtype": "TCP" 00:26:22.345 } 00:26:22.345 ] 00:26:22.345 } 00:26:22.345 ] 00:26:22.345 }' 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:22.345 13:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3651903 00:26:30.444 Initializing NVMe Controllers 00:26:30.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:30.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:30.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:30.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:30.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:30.444 Initialization complete. Launching workers. 
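The pass criterion for this first (baseline, --sock-priority 0) run is the nvmf_get_stats check just above: with four reactors and four perf cores, each of the four poll groups should own exactly one I/O qpair. The same condition can be expressed with a single jq filter instead of the length-plus-wc pipeline the test uses; the rpc.py path is relative to an SPDK checkout and the default RPC socket /var/tmp/spdk.sock is assumed:

  # Count poll groups that currently own exactly one I/O qpair (expect 4 in this run).
  count=$(./scripts/rpc.py nvmf_get_stats \
          | jq '[.poll_groups[] | select(.current_io_qpairs == 1)] | length')
  [ "$count" -eq 4 ] && echo "qpairs are spread evenly across all 4 poll groups"
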
00:26:30.444 ======================================================== 00:26:30.444 Latency(us) 00:26:30.444 Device Information : IOPS MiB/s Average min max 00:26:30.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10818.10 42.26 5917.55 2564.15 7516.10 00:26:30.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9179.40 35.86 6971.47 2045.07 11618.08 00:26:30.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10018.00 39.13 6388.90 2871.53 9478.26 00:26:30.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11280.60 44.06 5674.10 2421.54 9328.41 00:26:30.444 ======================================================== 00:26:30.444 Total : 41296.10 161.31 6199.66 2045.07 11618.08 00:26:30.444 00:26:30.444 13:34:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:30.444 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.444 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:30.444 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.444 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:30.444 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.444 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.444 rmmod nvme_tcp 00:26:30.444 rmmod nvme_fabrics 00:26:30.702 rmmod nvme_keyring 00:26:30.702 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.702 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:30.702 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:30.702 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3651823 ']' 00:26:30.702 13:34:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3651823 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3651823 ']' 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3651823 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3651823 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3651823' 00:26:30.703 killing process with pid 3651823 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3651823 00:26:30.703 13:34:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3651823 00:26:30.961 13:34:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.961 13:34:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.961 13:34:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.962 13:34:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.962 13:34:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.962 13:34:28 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.962 13:34:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.962 13:34:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.864 13:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:32.864 13:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:32.864 13:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:33.441 13:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:35.968 13:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.279 13:34:37 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.279 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:41.280 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:41.280 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:41.280 Found net devices under 0000:09:00.0: cvl_0_0 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:41.280 Found net devices under 0000:09:00.1: cvl_0_1 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.280 13:34:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.280 
13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:41.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:26:41.280 00:26:41.280 --- 10.0.0.2 ping statistics --- 00:26:41.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.280 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:26:41.280 00:26:41.280 --- 10.0.0.1 ping statistics --- 00:26:41.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.280 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:41.280 net.core.busy_poll = 1 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:41.280 net.core.busy_read = 1 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:41.280 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3654513 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3654513 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3654513 ']' 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.281 [2024-07-12 13:34:38.330901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:26:41.281 [2024-07-12 13:34:38.330996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.281 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.281 [2024-07-12 13:34:38.369523] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:41.281 [2024-07-12 13:34:38.396515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.281 [2024-07-12 13:34:38.477266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.281 [2024-07-12 13:34:38.477337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.281 [2024-07-12 13:34:38.477352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.281 [2024-07-12 13:34:38.477364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.281 [2024-07-12 13:34:38.477380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
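Before this second target comes up, adq_configure_driver (traced just above) enables the pieces ADQ actually needs on the target-side port: hardware TC offload, busy polling, an mqprio qdisc that carves out a dedicated traffic class, and a flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into that class. Condensed, with the namespace and interface names from this run (the set_xps_rxqs helper that follows in the trace is an SPDK script and is omitted here):

  NS=cvl_0_0_ns_spdk DEV=cvl_0_0
  ip netns exec "$NS" ethtool --offload "$DEV" hw-tc-offload on
  ip netns exec "$NS" ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # TC0 = default traffic (2 queues at offset 0), TC1 = ADQ queue set (2 queues at offset 2)
  ip netns exec "$NS" tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec "$NS" tc qdisc add dev "$DEV" ingress
  ip netns exec "$NS" tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP into TC1, hardware only

With --enable-placement-id 1 and --sock-priority 1 on this second target, connections land on the TC1 queue set, which is why the later stats check passes when two of the four poll groups are left with no I/O qpairs at all.
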
00:26:41.281 [2024-07-12 13:34:38.477468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.281 [2024-07-12 13:34:38.477498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.281 [2024-07-12 13:34:38.477555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.281 [2024-07-12 13:34:38.477557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.281 [2024-07-12 13:34:38.720074] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.281 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.540 Malloc1 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.540 13:34:38 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.540 [2024-07-12 13:34:38.771510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3654548 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:41.540 13:34:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:41.540 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:43.441 "tick_rate": 2700000000, 00:26:43.441 "poll_groups": [ 00:26:43.441 { 00:26:43.441 "name": "nvmf_tgt_poll_group_000", 00:26:43.441 "admin_qpairs": 1, 00:26:43.441 "io_qpairs": 2, 00:26:43.441 "current_admin_qpairs": 1, 00:26:43.441 "current_io_qpairs": 2, 00:26:43.441 "pending_bdev_io": 0, 00:26:43.441 "completed_nvme_io": 26384, 00:26:43.441 "transports": [ 00:26:43.441 { 00:26:43.441 "trtype": "TCP" 00:26:43.441 } 00:26:43.441 ] 00:26:43.441 }, 00:26:43.441 { 00:26:43.441 "name": "nvmf_tgt_poll_group_001", 00:26:43.441 "admin_qpairs": 0, 00:26:43.441 "io_qpairs": 2, 00:26:43.441 "current_admin_qpairs": 0, 00:26:43.441 "current_io_qpairs": 2, 00:26:43.441 "pending_bdev_io": 0, 00:26:43.441 "completed_nvme_io": 25884, 00:26:43.441 "transports": [ 00:26:43.441 { 00:26:43.441 "trtype": "TCP" 00:26:43.441 } 00:26:43.441 ] 00:26:43.441 }, 00:26:43.441 { 00:26:43.441 "name": "nvmf_tgt_poll_group_002", 00:26:43.441 "admin_qpairs": 0, 00:26:43.441 "io_qpairs": 0, 00:26:43.441 "current_admin_qpairs": 0, 00:26:43.441 "current_io_qpairs": 0, 00:26:43.441 "pending_bdev_io": 0, 00:26:43.441 "completed_nvme_io": 0, 
00:26:43.441 "transports": [ 00:26:43.441 { 00:26:43.441 "trtype": "TCP" 00:26:43.441 } 00:26:43.441 ] 00:26:43.441 }, 00:26:43.441 { 00:26:43.441 "name": "nvmf_tgt_poll_group_003", 00:26:43.441 "admin_qpairs": 0, 00:26:43.441 "io_qpairs": 0, 00:26:43.441 "current_admin_qpairs": 0, 00:26:43.441 "current_io_qpairs": 0, 00:26:43.441 "pending_bdev_io": 0, 00:26:43.441 "completed_nvme_io": 0, 00:26:43.441 "transports": [ 00:26:43.441 { 00:26:43.441 "trtype": "TCP" 00:26:43.441 } 00:26:43.441 ] 00:26:43.441 } 00:26:43.441 ] 00:26:43.441 }' 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:43.441 13:34:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3654548 00:26:51.547 Initializing NVMe Controllers 00:26:51.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:51.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:51.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:51.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:51.547 Initialization complete. Launching workers. 00:26:51.547 ======================================================== 00:26:51.547 Latency(us) 00:26:51.547 Device Information : IOPS MiB/s Average min max 00:26:51.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5801.05 22.66 11034.81 1720.75 53580.48 00:26:51.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7907.73 30.89 8095.06 1766.37 54047.20 00:26:51.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6295.04 24.59 10198.01 1568.87 54116.05 00:26:51.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7532.93 29.43 8522.83 1642.23 54012.74 00:26:51.547 ======================================================== 00:26:51.548 Total : 27536.75 107.57 9312.13 1568.87 54116.05 00:26:51.548 00:26:51.548 13:34:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:51.548 13:34:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.548 13:34:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:51.548 13:34:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.548 13:34:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:51.548 13:34:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.548 13:34:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.548 rmmod nvme_tcp 00:26:51.548 rmmod nvme_fabrics 00:26:51.548 rmmod nvme_keyring 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3654513 ']' 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3654513 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3654513 ']' 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3654513 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.548 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3654513 00:26:51.805 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.805 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.805 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3654513' 00:26:51.805 killing process with pid 3654513 00:26:51.805 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3654513 00:26:51.805 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3654513 00:26:52.064 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:52.064 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.064 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.064 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.064 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.064 13:34:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.064 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.064 13:34:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.971 13:34:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.971 13:34:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:53.971 00:26:53.971 real 0m44.003s 00:26:53.971 user 2m32.838s 00:26:53.971 sys 0m12.223s 00:26:53.971 13:34:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.971 13:34:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.971 ************************************ 00:26:53.971 END TEST nvmf_perf_adq 00:26:53.971 ************************************ 00:26:53.971 13:34:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:53.971 13:34:51 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:53.971 13:34:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:53.971 13:34:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.971 13:34:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.971 ************************************ 00:26:53.971 START TEST nvmf_shutdown 00:26:53.971 ************************************ 00:26:53.971 13:34:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:54.233 * Looking for test storage... 
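For reference, the nvmftestfini teardown that closes out the ADQ test above reduces to unloading the host-side NVMe/TCP modules, stopping nvmf_tgt, and removing the namespace plumbing. A rough equivalent follows; the real cleanup helpers live in nvmf/common.sh, "ip netns delete" is only an approximation of what _remove_spdk_ns does for this namespace, and $nvmfpid stands for the target PID recorded by nvmfappstart:

  NS=cvl_0_0_ns_spdk
  modprobe -r nvme-tcp nvme-fabrics            # the trace also drops nvme_keyring alongside these
  kill "$nvmfpid"                              # $nvmfpid: assumed to hold the nvmf_tgt PID
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done   # wait for the target to exit
  ip netns delete "$NS"                        # approximation of _remove_spdk_ns for this run
  ip -4 addr flush cvl_0_1                     # clear the initiator-side address, as at the end of the trace
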
00:26:54.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:54.233 ************************************ 00:26:54.233 START TEST nvmf_shutdown_tc1 00:26:54.233 ************************************ 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:26:54.233 13:34:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:54.233 13:34:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:56.136 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:56.136 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.136 13:34:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:56.136 Found net devices under 0000:09:00.0: cvl_0_0 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:56.136 Found net devices under 0000:09:00.1: cvl_0_1 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.136 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.137 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:26:56.396 00:26:56.396 --- 10.0.0.2 ping statistics --- 00:26:56.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.396 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:26:56.396 00:26:56.396 --- 10.0.0.1 ping statistics --- 00:26:56.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.396 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3657772 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3657772 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3657772 ']' 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.396 13:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.396 [2024-07-12 13:34:53.792568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:26:56.396 [2024-07-12 13:34:53.792654] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.396 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.396 [2024-07-12 13:34:53.831884] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:56.396 [2024-07-12 13:34:53.858302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:56.654 [2024-07-12 13:34:53.950402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.654 [2024-07-12 13:34:53.950457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.654 [2024-07-12 13:34:53.950479] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.654 [2024-07-12 13:34:53.950508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.655 [2024-07-12 13:34:53.950525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.655 [2024-07-12 13:34:53.950640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.655 [2024-07-12 13:34:53.950703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:56.655 [2024-07-12 13:34:53.950752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:56.655 [2024-07-12 13:34:53.950759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.655 [2024-07-12 13:34:54.110172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.655 13:34:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.655 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.912 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.913 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.913 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.913 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:56.913 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:56.913 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.913 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.913 Malloc1 00:26:56.913 [2024-07-12 13:34:54.198113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.913 Malloc2 00:26:56.913 Malloc3 00:26:56.913 Malloc4 00:26:56.913 Malloc5 00:26:57.171 Malloc6 00:26:57.171 Malloc7 00:26:57.171 Malloc8 00:26:57.171 Malloc9 00:26:57.171 Malloc10 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:57.429 13:34:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3657883 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3657883 /var/tmp/bdevperf.sock 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3657883 ']' 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:57.429 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:57.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.430 { 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme$subsystem", 00:26:57.430 "trtype": "$TEST_TRANSPORT", 00:26:57.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "$NVMF_PORT", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.430 "hdgst": ${hdgst:-false}, 00:26:57.430 "ddgst": ${ddgst:-false} 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 } 00:26:57.430 EOF 00:26:57.430 )") 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:57.430 13:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme1", 00:26:57.430 "trtype": "tcp", 00:26:57.430 "traddr": "10.0.0.2", 00:26:57.430 "adrfam": "ipv4", 00:26:57.430 "trsvcid": "4420", 00:26:57.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:57.430 "hdgst": false, 00:26:57.430 "ddgst": false 00:26:57.430 }, 00:26:57.430 "method": "bdev_nvme_attach_controller" 00:26:57.430 },{ 00:26:57.430 "params": { 00:26:57.430 "name": "Nvme2", 00:26:57.430 "trtype": "tcp", 00:26:57.430 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:57.431 "hdgst": false, 00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 },{ 00:26:57.431 "params": { 00:26:57.431 "name": "Nvme3", 00:26:57.431 "trtype": "tcp", 00:26:57.431 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:57.431 "hdgst": false, 00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 },{ 00:26:57.431 "params": { 00:26:57.431 "name": "Nvme4", 00:26:57.431 "trtype": "tcp", 00:26:57.431 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:57.431 "hdgst": false, 00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 },{ 00:26:57.431 "params": { 00:26:57.431 "name": "Nvme5", 00:26:57.431 "trtype": "tcp", 00:26:57.431 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:57.431 "hdgst": false, 00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 },{ 00:26:57.431 "params": { 00:26:57.431 "name": "Nvme6", 00:26:57.431 "trtype": "tcp", 00:26:57.431 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:57.431 "hdgst": false, 00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 },{ 00:26:57.431 "params": { 00:26:57.431 "name": "Nvme7", 00:26:57.431 "trtype": "tcp", 00:26:57.431 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:57.431 "hdgst": false, 00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 },{ 00:26:57.431 "params": { 00:26:57.431 "name": "Nvme8", 00:26:57.431 "trtype": "tcp", 00:26:57.431 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:57.431 "hdgst": false, 
00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 },{ 00:26:57.431 "params": { 00:26:57.431 "name": "Nvme9", 00:26:57.431 "trtype": "tcp", 00:26:57.431 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:57.431 "hdgst": false, 00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 },{ 00:26:57.431 "params": { 00:26:57.431 "name": "Nvme10", 00:26:57.431 "trtype": "tcp", 00:26:57.431 "traddr": "10.0.0.2", 00:26:57.431 "adrfam": "ipv4", 00:26:57.431 "trsvcid": "4420", 00:26:57.431 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:57.431 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:57.431 "hdgst": false, 00:26:57.431 "ddgst": false 00:26:57.431 }, 00:26:57.431 "method": "bdev_nvme_attach_controller" 00:26:57.431 }' 00:26:57.431 [2024-07-12 13:34:54.715006] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:26:57.431 [2024-07-12 13:34:54.715077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:57.431 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.431 [2024-07-12 13:34:54.749732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:57.431 [2024-07-12 13:34:54.778452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.431 [2024-07-12 13:34:54.865652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3657883 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:59.329 13:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:00.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3657883 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3657772 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": ${ddgst:-false} 00:27:00.262 }, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": ${ddgst:-false} 00:27:00.262 }, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": ${ddgst:-false} 00:27:00.262 }, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": 
${ddgst:-false} 00:27:00.262 }, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": ${ddgst:-false} 00:27:00.262 }, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": ${ddgst:-false} 00:27:00.262 }, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": ${ddgst:-false} 00:27:00.262 }, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": ${ddgst:-false} 00:27:00.262 
}, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.262 "trtype": "$TEST_TRANSPORT", 00:27:00.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.262 "adrfam": "ipv4", 00:27:00.262 "trsvcid": "$NVMF_PORT", 00:27:00.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.262 "hdgst": ${hdgst:-false}, 00:27:00.262 "ddgst": ${ddgst:-false} 00:27:00.262 }, 00:27:00.262 "method": "bdev_nvme_attach_controller" 00:27:00.262 } 00:27:00.262 EOF 00:27:00.262 )") 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.262 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.262 { 00:27:00.262 "params": { 00:27:00.262 "name": "Nvme$subsystem", 00:27:00.263 "trtype": "$TEST_TRANSPORT", 00:27:00.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "$NVMF_PORT", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.263 "hdgst": ${hdgst:-false}, 00:27:00.263 "ddgst": ${ddgst:-false} 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 } 00:27:00.263 EOF 00:27:00.263 )") 00:27:00.263 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:00.263 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:00.263 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:00.263 13:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme1", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme2", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme3", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme4", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme5", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme6", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme7", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme8", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:00.263 "hdgst": false, 
00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme9", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 },{ 00:27:00.263 "params": { 00:27:00.263 "name": "Nvme10", 00:27:00.263 "trtype": "tcp", 00:27:00.263 "traddr": "10.0.0.2", 00:27:00.263 "adrfam": "ipv4", 00:27:00.263 "trsvcid": "4420", 00:27:00.263 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:00.263 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:00.263 "hdgst": false, 00:27:00.263 "ddgst": false 00:27:00.263 }, 00:27:00.263 "method": "bdev_nvme_attach_controller" 00:27:00.263 }' 00:27:00.521 [2024-07-12 13:34:57.740966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:00.521 [2024-07-12 13:34:57.741052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658301 ] 00:27:00.521 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.521 [2024-07-12 13:34:57.777658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:00.521 [2024-07-12 13:34:57.807021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.521 [2024-07-12 13:34:57.897672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.929 Running I/O for 1 seconds... 
00:27:03.299 00:27:03.299 Latency(us) 00:27:03.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme1n1 : 1.07 238.84 14.93 0.00 0.00 265105.45 18738.44 236123.78 00:27:03.299 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme2n1 : 1.04 185.29 11.58 0.00 0.00 335681.61 22233.69 270299.59 00:27:03.299 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme3n1 : 1.13 226.05 14.13 0.00 0.00 271219.48 16019.91 262532.36 00:27:03.299 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme4n1 : 1.17 272.79 17.05 0.00 0.00 221317.80 18058.81 270299.59 00:27:03.299 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme5n1 : 1.14 224.01 14.00 0.00 0.00 264473.98 20874.43 254765.13 00:27:03.299 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme6n1 : 1.18 216.81 13.55 0.00 0.00 269352.01 23010.42 278066.82 00:27:03.299 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme7n1 : 1.15 279.34 17.46 0.00 0.00 204821.09 17670.45 226803.11 00:27:03.299 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme8n1 : 1.19 269.64 16.85 0.00 0.00 209594.60 16214.09 257872.02 00:27:03.299 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme9n1 : 1.20 266.06 16.63 0.00 0.00 209305.90 13592.65 243891.01 00:27:03.299 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.299 Verification LBA range: start 0x0 length 0x400 00:27:03.299 Nvme10n1 : 1.19 214.87 13.43 0.00 0.00 254407.30 22233.69 298261.62 00:27:03.299 =================================================================================================================== 00:27:03.299 Total : 2393.71 149.61 0.00 0.00 244894.76 13592.65 298261.62 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.556 rmmod nvme_tcp 00:27:03.556 rmmod nvme_fabrics 00:27:03.556 rmmod nvme_keyring 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3657772 ']' 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3657772 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3657772 ']' 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3657772 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3657772 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3657772' 00:27:03.556 killing process with pid 3657772 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3657772 00:27:03.556 13:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3657772 00:27:04.120 13:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:04.120 13:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:04.120 13:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:04.120 13:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.120 13:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:04.120 13:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.120 13:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.120 13:35:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.022 00:27:06.022 real 0m11.903s 00:27:06.022 user 0m34.049s 00:27:06.022 sys 0m3.320s 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:06.022 13:35:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.022 ************************************ 00:27:06.022 END TEST nvmf_shutdown_tc1 00:27:06.022 ************************************ 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.022 ************************************ 00:27:06.022 START TEST nvmf_shutdown_tc2 00:27:06.022 ************************************ 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:06.022 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.022 13:35:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:06.022 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:06.022 Found net devices under 0000:09:00.0: cvl_0_0 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:06.022 Found net devices under 0000:09:00.1: cvl_0_1 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.022 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.279 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.279 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.279 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:06.279 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.279 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.279 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.279 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:06.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:27:06.280 00:27:06.280 --- 10.0.0.2 ping statistics --- 00:27:06.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.280 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:27:06.280 00:27:06.280 --- 10.0.0.1 ping statistics --- 00:27:06.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.280 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3659062 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3659062 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3659062 ']' 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
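The two pings above confirm the namespace split that nvmf_tcp_init set up a few lines earlier: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, while the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24. Condensed into a standalone sketch (interface and namespace names are taken from this log; every command needs root):

#!/usr/bin/env bash
# Namespace wiring behind the pings above.
set -e
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-facing port, moved into the namespace
INI_IF=cvl_0_1   # initiator-facing port, left in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port on the initiator-side interface, then check
# reachability in both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Because the target lives in that namespace, the nvmf_tgt launch recorded just above is prefixed with ip netns exec cvl_0_0_ns_spdk.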
00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.280 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.280 [2024-07-12 13:35:03.684450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:06.280 [2024-07-12 13:35:03.684544] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.280 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.280 [2024-07-12 13:35:03.723658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:06.280 [2024-07-12 13:35:03.750077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.537 [2024-07-12 13:35:03.839094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.537 [2024-07-12 13:35:03.839163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.537 [2024-07-12 13:35:03.839175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.537 [2024-07-12 13:35:03.839186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.537 [2024-07-12 13:35:03.839210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.537 [2024-07-12 13:35:03.839304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.537 [2024-07-12 13:35:03.839437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.537 [2024-07-12 13:35:03.839492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.537 [2024-07-12 13:35:03.839489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.537 13:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.537 [2024-07-12 13:35:03.995094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.537 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.537 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:06.537 13:35:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:06.537 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.537 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.537 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:06.537 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.537 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.794 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.794 Malloc1 00:27:06.794 [2024-07-12 13:35:04.074203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.794 Malloc2 00:27:06.794 Malloc3 00:27:06.794 Malloc4 00:27:06.794 Malloc5 00:27:07.052 Malloc6 00:27:07.052 Malloc7 00:27:07.052 Malloc8 00:27:07.052 Malloc9 00:27:07.052 Malloc10 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.052 13:35:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3659238 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3659238 /var/tmp/bdevperf.sock 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3659238 ']' 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:07.052 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.052 { 00:27:07.052 "params": { 00:27:07.052 "name": "Nvme$subsystem", 00:27:07.052 "trtype": "$TEST_TRANSPORT", 00:27:07.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.052 "adrfam": "ipv4", 00:27:07.052 "trsvcid": "$NVMF_PORT", 00:27:07.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.053 "hdgst": ${hdgst:-false}, 00:27:07.053 "ddgst": ${ddgst:-false} 00:27:07.053 }, 00:27:07.053 "method": "bdev_nvme_attach_controller" 00:27:07.053 } 00:27:07.053 EOF 00:27:07.053 )") 00:27:07.053 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.053 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.053 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.053 { 00:27:07.053 "params": { 00:27:07.053 "name": "Nvme$subsystem", 00:27:07.053 "trtype": "$TEST_TRANSPORT", 00:27:07.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.053 "adrfam": "ipv4", 00:27:07.053 "trsvcid": "$NVMF_PORT", 00:27:07.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:27:07.053 "hdgst": ${hdgst:-false}, 00:27:07.053 "ddgst": ${ddgst:-false} 00:27:07.053 }, 00:27:07.053 "method": "bdev_nvme_attach_controller" 00:27:07.053 } 00:27:07.053 EOF 00:27:07.053 )") 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.311 { 00:27:07.311 "params": { 00:27:07.311 "name": "Nvme$subsystem", 00:27:07.311 "trtype": "$TEST_TRANSPORT", 00:27:07.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.311 "adrfam": "ipv4", 00:27:07.311 "trsvcid": "$NVMF_PORT", 00:27:07.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.311 "hdgst": ${hdgst:-false}, 00:27:07.311 "ddgst": ${ddgst:-false} 00:27:07.311 }, 00:27:07.311 "method": "bdev_nvme_attach_controller" 00:27:07.311 } 00:27:07.311 EOF 00:27:07.311 )") 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.311 { 00:27:07.311 "params": { 00:27:07.311 "name": "Nvme$subsystem", 00:27:07.311 "trtype": "$TEST_TRANSPORT", 00:27:07.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.311 "adrfam": "ipv4", 00:27:07.311 "trsvcid": "$NVMF_PORT", 00:27:07.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.311 "hdgst": ${hdgst:-false}, 00:27:07.311 "ddgst": ${ddgst:-false} 00:27:07.311 }, 00:27:07.311 "method": "bdev_nvme_attach_controller" 00:27:07.311 } 00:27:07.311 EOF 00:27:07.311 )") 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.311 { 00:27:07.311 "params": { 00:27:07.311 "name": "Nvme$subsystem", 00:27:07.311 "trtype": "$TEST_TRANSPORT", 00:27:07.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.311 "adrfam": "ipv4", 00:27:07.311 "trsvcid": "$NVMF_PORT", 00:27:07.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.311 "hdgst": ${hdgst:-false}, 00:27:07.311 "ddgst": ${ddgst:-false} 00:27:07.311 }, 00:27:07.311 "method": "bdev_nvme_attach_controller" 00:27:07.311 } 00:27:07.311 EOF 00:27:07.311 )") 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.311 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.311 { 00:27:07.311 "params": { 00:27:07.311 "name": "Nvme$subsystem", 00:27:07.311 "trtype": "$TEST_TRANSPORT", 00:27:07.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.311 "adrfam": "ipv4", 00:27:07.311 "trsvcid": "$NVMF_PORT", 00:27:07.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.311 "hdgst": 
${hdgst:-false}, 00:27:07.312 "ddgst": ${ddgst:-false} 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 } 00:27:07.312 EOF 00:27:07.312 )") 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.312 { 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme$subsystem", 00:27:07.312 "trtype": "$TEST_TRANSPORT", 00:27:07.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "$NVMF_PORT", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.312 "hdgst": ${hdgst:-false}, 00:27:07.312 "ddgst": ${ddgst:-false} 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 } 00:27:07.312 EOF 00:27:07.312 )") 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.312 { 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme$subsystem", 00:27:07.312 "trtype": "$TEST_TRANSPORT", 00:27:07.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "$NVMF_PORT", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.312 "hdgst": ${hdgst:-false}, 00:27:07.312 "ddgst": ${ddgst:-false} 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 } 00:27:07.312 EOF 00:27:07.312 )") 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.312 { 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme$subsystem", 00:27:07.312 "trtype": "$TEST_TRANSPORT", 00:27:07.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "$NVMF_PORT", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.312 "hdgst": ${hdgst:-false}, 00:27:07.312 "ddgst": ${ddgst:-false} 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 } 00:27:07.312 EOF 00:27:07.312 )") 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.312 { 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme$subsystem", 00:27:07.312 "trtype": "$TEST_TRANSPORT", 00:27:07.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "$NVMF_PORT", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.312 "hdgst": ${hdgst:-false}, 00:27:07.312 
"ddgst": ${ddgst:-false} 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 } 00:27:07.312 EOF 00:27:07.312 )") 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:07.312 13:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme1", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 },{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme2", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 },{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme3", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 },{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme4", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 },{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme5", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 },{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme6", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 },{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme7", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 
},{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme8", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 },{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme9", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 },{ 00:27:07.312 "params": { 00:27:07.312 "name": "Nvme10", 00:27:07.312 "trtype": "tcp", 00:27:07.312 "traddr": "10.0.0.2", 00:27:07.312 "adrfam": "ipv4", 00:27:07.312 "trsvcid": "4420", 00:27:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:07.312 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:07.312 "hdgst": false, 00:27:07.312 "ddgst": false 00:27:07.312 }, 00:27:07.312 "method": "bdev_nvme_attach_controller" 00:27:07.312 }' 00:27:07.312 [2024-07-12 13:35:04.561030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:07.312 [2024-07-12 13:35:04.561105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659238 ] 00:27:07.312 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.312 [2024-07-12 13:35:04.596562] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:07.312 [2024-07-12 13:35:04.625855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.312 [2024-07-12 13:35:04.712941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.209 Running I/O for 10 seconds... 
00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:09.209 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:09.468 13:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@67 -- # sleep 0.25 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3659238 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3659238 ']' 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3659238 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3659238 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3659238' 00:27:09.726 killing process with pid 3659238 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3659238 00:27:09.726 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3659238 00:27:09.984 Received shutdown signal, test time was about 1.080656 seconds 00:27:09.984 00:27:09.984 Latency(us) 00:27:09.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.984 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme1n1 : 1.07 238.74 14.92 0.00 0.00 264535.61 19320.98 256318.58 00:27:09.984 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme2n1 : 1.06 246.04 15.38 0.00 0.00 248823.20 10631.40 245444.46 00:27:09.984 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 
length 0x400 00:27:09.984 Nvme3n1 : 1.05 243.19 15.20 0.00 0.00 247889.16 19126.80 248551.35 00:27:09.984 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme4n1 : 1.06 240.75 15.05 0.00 0.00 245180.49 16602.45 259425.47 00:27:09.984 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme5n1 : 1.04 185.42 11.59 0.00 0.00 309972.64 21845.33 257872.02 00:27:09.984 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme6n1 : 1.08 237.07 14.82 0.00 0.00 236893.87 18350.08 259425.47 00:27:09.984 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme7n1 : 1.07 239.42 14.96 0.00 0.00 229287.06 20680.25 256318.58 00:27:09.984 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme8n1 : 1.08 237.71 14.86 0.00 0.00 225765.26 20583.16 256318.58 00:27:09.984 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme9n1 : 1.05 182.96 11.44 0.00 0.00 284458.86 23884.23 293601.28 00:27:09.984 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.984 Verification LBA range: start 0x0 length 0x400 00:27:09.984 Nvme10n1 : 1.05 182.80 11.43 0.00 0.00 275398.86 20874.43 270299.59 00:27:09.984 =================================================================================================================== 00:27:09.984 Total : 2234.12 139.63 0.00 0.00 254125.90 10631.40 293601.28 00:27:10.243 13:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:11.174 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3659062 00:27:11.174 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:11.174 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:11.174 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:11.175 rmmod nvme_tcp 00:27:11.175 rmmod nvme_fabrics 00:27:11.175 rmmod nvme_keyring 00:27:11.175 13:35:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3659062 ']' 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3659062 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3659062 ']' 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3659062 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3659062 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3659062' 00:27:11.175 killing process with pid 3659062 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3659062 00:27:11.175 13:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3659062 00:27:11.740 13:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:11.740 13:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:11.740 13:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:11.740 13:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.740 13:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:11.740 13:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.740 13:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.740 13:35:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.271 00:27:14.271 real 0m7.710s 00:27:14.271 user 0m23.428s 00:27:14.271 sys 0m1.488s 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.271 ************************************ 00:27:14.271 END TEST nvmf_shutdown_tc2 00:27:14.271 ************************************ 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:14.271 13:35:11 
nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:14.271 ************************************ 00:27:14.271 START TEST nvmf_shutdown_tc3 00:27:14.271 ************************************ 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.271 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.272 13:35:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:14.272 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:14.272 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:14.272 Found net devices under 0000:09:00.0: cvl_0_0 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:14.272 Found net devices under 0000:09:00.1: cvl_0_1 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
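The nvmf_tcp_init sequence traced immediately below carves the two E810 ports found above into a target/initiator pair: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened for NVMe/TCP, and a ping in each direction serves as a sanity check. A condensed sketch of that bring-up, reconstructed from the logged commands (the variable names TARGET_IF/INITIATOR_IF/NS are introduced here only for readability; interface names, addresses and port come from the trace, and the in-tree helper in nvmf/common.sh does more bookkeeping than shown):

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init bring-up seen in the trace: target NIC in a
# namespace, initiator NIC in the root namespace, NVMe/TCP port opened,
# connectivity checked in both directions.
TARGET_IF=cvl_0_0        # first E810 port -> NVMe-oF target side
INITIATOR_IF=cvl_0_1     # second E810 port -> initiator side
NS=cvl_0_0_ns_spdk       # namespace the nvmf_tgt app is launched in

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP (TCP port 4420) through the host firewall on the initiator interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# connectivity sanity checks, matching the ping output in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1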
00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.272 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:14.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:27:14.273 00:27:14.273 --- 10.0.0.2 ping statistics --- 00:27:14.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.273 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:27:14.273 00:27:14.273 --- 10.0.0.1 ping statistics --- 00:27:14.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.273 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3660147 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3660147 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3660147 ']' 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.273 [2024-07-12 13:35:11.448178] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:27:14.273 [2024-07-12 13:35:11.448262] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.273 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.273 [2024-07-12 13:35:11.485364] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:14.273 [2024-07-12 13:35:11.511433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.273 [2024-07-12 13:35:11.592908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.273 [2024-07-12 13:35:11.592958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.273 [2024-07-12 13:35:11.592985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.273 [2024-07-12 13:35:11.592997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.273 [2024-07-12 13:35:11.593006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.273 [2024-07-12 13:35:11.593087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.273 [2024-07-12 13:35:11.593192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.273 [2024-07-12 13:35:11.593271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:14.273 [2024-07-12 13:35:11.593273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.273 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.531 [2024-07-12 13:35:11.746073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.531 13:35:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.531 13:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.531 Malloc1 00:27:14.531 [2024-07-12 13:35:11.822110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.531 Malloc2 00:27:14.531 Malloc3 00:27:14.531 Malloc4 00:27:14.531 Malloc5 00:27:14.788 Malloc6 00:27:14.788 Malloc7 00:27:14.788 Malloc8 00:27:14.788 Malloc9 00:27:14.788 Malloc10 00:27:14.788 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.788 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:14.788 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:14.788 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:15.054 13:35:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3660323 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3660323 /var/tmp/bdevperf.sock 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3660323 ']' 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:15.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.054 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.054 { 00:27:15.054 "params": { 00:27:15.054 "name": "Nvme$subsystem", 00:27:15.054 "trtype": "$TEST_TRANSPORT", 00:27:15.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.054 "adrfam": "ipv4", 00:27:15.054 "trsvcid": "$NVMF_PORT", 00:27:15.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.054 "hdgst": ${hdgst:-false}, 00:27:15.054 "ddgst": ${ddgst:-false} 00:27:15.054 }, 00:27:15.054 "method": "bdev_nvme_attach_controller" 00:27:15.054 } 00:27:15.054 EOF 00:27:15.054 )") 00:27:15.055 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:15.055 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
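The surrounding gen_nvmf_target_json trace (the per-subsystem here-doc fragments above, and the IFS=, / printf / jq expansion that follows) shows how the configuration handed to bdevperf via --json /dev/fd/63 is assembled: one bdev_nvme_attach_controller fragment per subsystem nqn.2016-06.io.spdk:cnode1..10, comma-joined and validated with jq. A minimal runnable sketch of that pattern (hedged: the outer "subsystems"/"bdev" wrapper used here is an illustrative assumption; the real helper in nvmf/common.sh may include additional bdev options):

#!/usr/bin/env bash
# Build one attach-controller fragment per subsystem, join them, pretty-print.
gen_bdevperf_json_sketch() {
  local subsystem
  local -a config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local joined
  joined=$(IFS=,; printf '%s' "${config[*]}")   # comma-join the fragments
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$joined" | jq .
}

gen_bdevperf_json_sketch 1 2 3 4 5 6 7 8 9 10   # same ten subsystems as this run

The pretty-printed result the real helper produced for this run is printed next in the log.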
00:27:15.055 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:15.055 13:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme1", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme2", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme3", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme4", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme5", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme6", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme7", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme8", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:15.055 "hdgst": false, 
00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme9", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 },{ 00:27:15.055 "params": { 00:27:15.055 "name": "Nvme10", 00:27:15.055 "trtype": "tcp", 00:27:15.055 "traddr": "10.0.0.2", 00:27:15.055 "adrfam": "ipv4", 00:27:15.055 "trsvcid": "4420", 00:27:15.055 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:15.055 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:15.055 "hdgst": false, 00:27:15.055 "ddgst": false 00:27:15.055 }, 00:27:15.055 "method": "bdev_nvme_attach_controller" 00:27:15.055 }' 00:27:15.055 [2024-07-12 13:35:12.317917] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:15.055 [2024-07-12 13:35:12.317994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660323 ] 00:27:15.055 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.055 [2024-07-12 13:35:12.353221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:15.055 [2024-07-12 13:35:12.382211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.055 [2024-07-12 13:35:12.468609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.948 Running I/O for 10 seconds... 
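With bdevperf now running I/O, the trace that follows is the shutdown.sh waitforio helper (the same one that ran for tc2 earlier in this log) polling the Nvme1n1 bdev over the bdevperf RPC socket until at least 100 reads have completed, retrying up to ten times with a 250 ms pause. A reconstruction from the xtrace line markers shown (shutdown.sh@50-69); rpc_cmd below stands in for the autotest RPC wrapper and simply shells out to scripts/rpc.py:

#!/usr/bin/env bash
# Poll bdev_get_iostat until the named bdev shows >= 100 completed reads.
rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1          # shutdown.sh@50
    [ -z "$bdev" ] && return 1          # shutdown.sh@54
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do     # shutdown.sh@59
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then   # shutdown.sh@63
            ret=0
            break
        fi
        sleep 0.25                      # shutdown.sh@67
    done
    return $ret                         # shutdown.sh@69
}

waitforio /var/tmp/bdevperf.sock Nvme1n1

In the trace below the counter climbs 3 -> 67 -> 131 before the -ge 100 check passes, ret is set to 0 and the loop breaks, after which the test kills the nvmf_tgt process (pid 3660147) while I/O is still in flight.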
00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:16.948 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:17.205 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3660147 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3660147 ']' 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3660147 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:27:17.462 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.743 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3660147 00:27:17.743 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:17.743 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:17.743 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3660147' 00:27:17.743 killing process with pid 3660147 00:27:17.743 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3660147 00:27:17.743 13:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3660147 00:27:17.743 [2024-07-12 13:35:14.960449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4aa0 is same with the state(5) to be set 00:27:17.743 [2024-07-12 13:35:14.960521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4aa0 is same with the state(5) to be set 00:27:17.743 [2024-07-12 13:35:14.960537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4aa0 is same with the state(5) to be set 00:27:17.743 [2024-07-12 13:35:14.960550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
[2024-07-12 13:35:14.960449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4aa0 is same with the state(5) to be set
[previous nvmf_tcp_qpair_set_recv_state error repeated for tqpair=0x1dc4aa0 until 13:35:14.961290]
[2024-07-12 13:35:14.962501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc74a0 is same with the state(5) to be set
[2024-07-12 13:35:14.963296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4f40 is same with the state(5) to be set
[previous error repeated for tqpair=0x1dc4f40 until 13:35:14.964100]
[2024-07-12 13:35:14.965490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc53e0 is same with the state(5) to be set
[2024-07-12 13:35:14.966242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc58a0 is same with the state(5) to be set
[previous error repeated for tqpair=0x1dc58a0 until 13:35:14.966900]
[2024-07-12 13:35:14.967686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc5d40 is same with the state(5) to be set
[previous error repeated for tqpair=0x1dc5d40 until 13:35:14.968467]
[2024-07-12 13:35:14.969547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc6200 is same with the state(5) to be set
[previous error repeated for tqpair=0x1dc6200 until 13:35:14.970311]
[2024-07-12 13:35:14.971546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc66a0 is same with the state(5) to be set
[previous error repeated for tqpair=0x1dc66a0 until 13:35:14.972276]
[2024-07-12 13:35:14.973362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc6b60 is same with the state(5) to be set
[previous error repeated for tqpair=0x1dc6b60 until 13:35:14.974136]
[2024-07-12 13:35:14.978413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-12 13:35:14.978456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-12 13:35:14.978486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-12 13:35:14.978502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-12 13:35:14.978519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-12 13:35:14.978533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-12 13:35:14.978549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-12 13:35:14.978563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-12 
13:35:14.978579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.748 [2024-07-12 13:35:14.978592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.748 [2024-07-12 13:35:14.978608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.748 [2024-07-12 13:35:14.978622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.748 [2024-07-12 13:35:14.978637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.748 [2024-07-12 13:35:14.978650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.748 [2024-07-12 13:35:14.978666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.748 [2024-07-12 13:35:14.978692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.748 [2024-07-12 13:35:14.978709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.748 [2024-07-12 13:35:14.978724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.748 [2024-07-12 13:35:14.978739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.748 [2024-07-12 13:35:14.978753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.748 [2024-07-12 13:35:14.978768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.978782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.978797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.978810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.978826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.978840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.978856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.978870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 
13:35:14.978886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.978899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.978915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.978929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.978944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.978958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.978973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.978986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.749 [2024-07-12 13:35:14.979922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.749 [2024-07-12 13:35:14.979937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.979951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.979966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.979979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.979995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.750 [2024-07-12 13:35:14.980372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980415] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.750 [2024-07-12 13:35:14.980495] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1532f80 was disconnected and freed. reset controller. 00:27:17.750 [2024-07-12 13:35:14.980673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.980696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.980725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.980752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.980778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf320 is same with the state(5) to be set 00:27:17.750 [2024-07-12 13:35:14.980842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.980866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.980895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.980922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.980949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.980961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bfe80 is same with the state(5) to be set 00:27:17.750 [2024-07-12 13:35:14.981007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.981027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.981042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.981055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.981069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.981081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.981095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.981108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.981120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be600 is same with the state(5) to be set 00:27:17.750 [2024-07-12 13:35:14.981161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.981180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.981195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.981208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.981222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.981235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.981248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.750 [2024-07-12 13:35:14.981261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.750 [2024-07-12 13:35:14.981274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeea610 is same with the state(5) to be set 00:27:17.751 [2024-07-12 13:35:14.981332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1437010 is same with the state(5) to be set 00:27:17.751 [2024-07-12 13:35:14.981496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431740 is same with the state(5) to be set 00:27:17.751 [2024-07-12 13:35:14.981661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0a90 is same with the state(5) to be set 00:27:17.751 [2024-07-12 13:35:14.981825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.981929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.981942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143db40 is same with the state(5) to be set 00:27:17.751 [2024-07-12 13:35:14.981987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.982007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.982035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.982063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.982090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982103] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f4f10 is same with the state(5) to be set 00:27:17.751 [2024-07-12 13:35:14.982147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.982167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.982202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.982249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.751 [2024-07-12 13:35:14.982297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8140 is same with the state(5) to be set 00:27:17.751 [2024-07-12 13:35:14.982543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.751 [2024-07-12 13:35:14.982969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.751 [2024-07-12 13:35:14.982985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.982998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.752 [2024-07-12 13:35:14.983959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.752 [2024-07-12 13:35:14.983974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.983988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:17.753 [2024-07-12 13:35:14.984233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.984495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.984581] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x152e7c0 was disconnected and freed. reset controller. 
00:27:17.753 [2024-07-12 13:35:14.989281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:17.753 [2024-07-12 13:35:14.989339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:17.753 [2024-07-12 13:35:14.989381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8140 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.989405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be600 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.990818] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1534420 was disconnected and freed. reset controller. 00:27:17.753 [2024-07-12 13:35:14.990917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bf320 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.990958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bfe80 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.990987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea610 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.991020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1437010 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.991054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1431740 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.991085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0a90 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.991114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143db40 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.991146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f4f10 (9): Bad file descriptor 00:27:17.753 [2024-07-12 13:35:14.991627] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:17.753 [2024-07-12 13:35:14.991713] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:17.753 [2024-07-12 13:35:14.991789] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:17.753 [2024-07-12 13:35:14.991877] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:17.753 [2024-07-12 13:35:14.992176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:17.753 [2024-07-12 13:35:14.992373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.753 [2024-07-12 13:35:14.992403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be600 with addr=10.0.0.2, port=4420 00:27:17.753 [2024-07-12 13:35:14.992419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be600 is same with the state(5) to be set 00:27:17.753 [2024-07-12 13:35:14.992543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.753 [2024-07-12 13:35:14.992578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8140 with addr=10.0.0.2, port=4420 00:27:17.753 [2024-07-12 13:35:14.992594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8140 is same with the 
state(5) to be set 00:27:17.753 [2024-07-12 13:35:14.992656] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:17.753 [2024-07-12 13:35:14.992724] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:17.753 [2024-07-12 13:35:14.992781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.992804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.992830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.992845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.992861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.992875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.992890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.992903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.992919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.992932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.992948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.992961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.992976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.992990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.993005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.993018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.993033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.993047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.993062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.993075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.993091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.993104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.993124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.993138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.993154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.993167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.753 [2024-07-12 13:35:14.993182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.753 [2024-07-12 13:35:14.993195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.993960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.993976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:17.754 [2024-07-12 13:35:14.993990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 
13:35:14.994288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.754 [2024-07-12 13:35:14.994479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.754 [2024-07-12 13:35:14.994494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994595] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ef640 is same with the state(5) to be set 00:27:17.755 [2024-07-12 13:35:14.994833] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13ef640 was disconnected and freed. reset controller. 
00:27:17.755 [2024-07-12 13:35:14.994914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.994973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.994990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 
13:35:14.995233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995543] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.755 [2024-07-12 13:35:14.995929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.755 [2024-07-12 13:35:14.995943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.995963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.995976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.995992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.756 [2024-07-12 13:35:14.996848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.756 [2024-07-12 13:35:14.996862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15307d0 is same with the state(5) to be set 00:27:17.756 [2024-07-12 13:35:14.996943] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15307d0 was disconnected and freed. reset controller. 00:27:17.756 [2024-07-12 13:35:14.997144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.756 [2024-07-12 13:35:14.997171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bfe80 with addr=10.0.0.2, port=4420 00:27:17.756 [2024-07-12 13:35:14.997187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bfe80 is same with the state(5) to be set 00:27:17.756 [2024-07-12 13:35:14.997208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be600 (9): Bad file descriptor 00:27:17.756 [2024-07-12 13:35:14.997227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8140 (9): Bad file descriptor 00:27:17.756 [2024-07-12 13:35:14.999599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:17.756 [2024-07-12 13:35:14.999639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:17.756 [2024-07-12 13:35:14.999680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bfe80 (9): Bad file descriptor 00:27:17.756 [2024-07-12 13:35:14.999700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:17.756 [2024-07-12 13:35:14.999714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:17.756 [2024-07-12 13:35:14.999728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:27:17.756 [2024-07-12 13:35:14.999749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:17.756 [2024-07-12 13:35:14.999763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:17.757 [2024-07-12 13:35:14.999776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:17.757 [2024-07-12 13:35:14.999853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.757 [2024-07-12 13:35:14.999879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.757 [2024-07-12 13:35:15.000005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.757 [2024-07-12 13:35:15.000033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1431740 with addr=10.0.0.2, port=4420 00:27:17.757 [2024-07-12 13:35:15.000049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431740 is same with the state(5) to be set 00:27:17.757 [2024-07-12 13:35:15.000170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.757 [2024-07-12 13:35:15.000195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143db40 with addr=10.0.0.2, port=4420 00:27:17.757 [2024-07-12 13:35:15.000210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143db40 is same with the state(5) to be set 00:27:17.757 [2024-07-12 13:35:15.000225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:17.757 [2024-07-12 13:35:15.000238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:17.757 [2024-07-12 13:35:15.000252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:17.757 [2024-07-12 13:35:15.000837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.757 [2024-07-12 13:35:15.000863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1431740 (9): Bad file descriptor 00:27:17.757 [2024-07-12 13:35:15.000883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143db40 (9): Bad file descriptor 00:27:17.757 [2024-07-12 13:35:15.000945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:17.757 [2024-07-12 13:35:15.000964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:17.757 [2024-07-12 13:35:15.000978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:17.757 [2024-07-12 13:35:15.000996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:17.757 [2024-07-12 13:35:15.001011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:17.757 [2024-07-12 13:35:15.001024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:17.757 [2024-07-12 13:35:15.001138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.757 [2024-07-12 13:35:15.001159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.757 [2024-07-12 13:35:15.001239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.001981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.001995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.757 [2024-07-12 13:35:15.002259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.757 [2024-07-12 13:35:15.002272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.002976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.002990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.003019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.003049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.003079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.003108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.003137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.003166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.003194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.003223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.003237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae940 is same with the state(5) to be set 00:27:17.758 [2024-07-12 13:35:15.004516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.758 [2024-07-12 13:35:15.004812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.758 [2024-07-12 13:35:15.004828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.004842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.004857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.004870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.004886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.004899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.004915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.004929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.004944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.004961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.004978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.004991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:17.759 [2024-07-12 13:35:15.005878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.005979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.005993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.006008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.006021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.006036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.006049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.759 [2024-07-12 13:35:15.006065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.759 [2024-07-12 13:35:15.006082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 
13:35:15.006170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.006407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.006420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d350 is same with the state(5) to be set 00:27:17.760 [2024-07-12 13:35:15.007658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.007983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.007997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008013] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.760 [2024-07-12 13:35:15.008471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.760 [2024-07-12 13:35:15.008484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.008974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.008989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
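Each NOTICE pair in this stretch of the log is SPDK printing one outstanding READ on I/O qpair 1 together with its completion after the submission queue was torn down during a controller reset; the same pattern simply repeats below for every remaining command ID. In the printed completion, "(00/08)" is the (status code type / status code) pair: status code type 0x0 is the generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is why every queued read is reported as ABORTED - SQ DELETION instead of completing with data. The small sketch below shows how that pair maps back to the printed label; it is an illustration only, not SPDK source and not part of this test run.

# Minimal sketch (illustration only, not SPDK code): decode the "(SCT/SC)"
# pair that appears in the completions above, e.g. "(00/08)".
# Only the status codes actually seen in this log are covered.
GENERIC_STATUS = {           # status code type 0x0: generic command status
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",   # command aborted due to SQ deletion
}

def decode_status(sct: int, sc: int) -> str:
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
    return "sct=0x%x sc=0x%02x" % (sct, sc)

# "(00/08)" -> "ABORTED - SQ DELETION", matching every completion printed here.
assert decode_status(0x00, 0x08) == "ABORTED - SQ DELETION"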
00:27:17.761 [2024-07-12 13:35:15.009206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 
13:35:15.009499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.009561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.009575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f0ab0 is same with the state(5) to be set 00:27:17.761 [2024-07-12 13:35:15.010817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.010839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.010860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.010875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.010891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.761 [2024-07-12 13:35:15.010906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.761 [2024-07-12 13:35:15.010922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.010936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.010951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.010964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.010979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.010993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.011976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.011991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.012021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.012050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.012080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.012110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.012140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.012170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.012199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.762 [2024-07-12 13:35:15.012228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.762 [2024-07-12 13:35:15.012244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.012775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.012788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f380 is same with the state(5) to be set 00:27:17.763 [2024-07-12 13:35:15.014036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014427] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.763 [2024-07-12 13:35:15.014765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.763 [2024-07-12 13:35:15.014778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.014794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.014807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.014823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.014836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.014851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.014865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.014880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.014893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.014909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.014922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.014937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.014951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.014966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.014979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.014995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:17.764 [2024-07-12 13:35:15.015616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 
13:35:15.015918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.764 [2024-07-12 13:35:15.015931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.764 [2024-07-12 13:35:15.015945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1531c80 is same with the state(5) to be set 00:27:17.764 [2024-07-12 13:35:15.017599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:17.764 [2024-07-12 13:35:15.017633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.764 [2024-07-12 13:35:15.017650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:17.764 [2024-07-12 13:35:15.017667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:17.764 [2024-07-12 13:35:15.017792] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:17.764 [2024-07-12 13:35:15.017818] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:17.764 [2024-07-12 13:35:15.017842] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:17.764 [2024-07-12 13:35:15.017934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:17.765 task offset: 22784 on job bdev=Nvme9n1 fails 00:27:17.765 00:27:17.765 Latency(us) 00:27:17.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme1n1 ended in about 0.92 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme1n1 : 0.92 139.53 8.72 69.76 0.00 302630.68 26796.94 290494.39 00:27:17.765 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme2n1 ended in about 0.92 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme2n1 : 0.92 143.39 8.96 69.52 0.00 291612.66 18155.90 285834.05 00:27:17.765 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme3n1 ended in about 0.90 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme3n1 : 0.90 213.21 13.33 71.07 0.00 213801.91 12621.75 276513.37 00:27:17.765 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme4n1 ended in about 0.91 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme4n1 : 0.91 153.60 9.60 70.22 0.00 266300.80 20000.62 254765.13 00:27:17.765 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme5n1 ended in about 0.92 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme5n1 : 0.92 138.57 8.66 69.29 0.00 281019.99 20583.16 260978.92 00:27:17.765 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme6n1 ended in about 0.93 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme6n1 : 0.93 138.10 
8.63 69.05 0.00 276154.09 24078.41 271853.04 00:27:17.765 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme7n1 ended in about 0.91 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme7n1 : 0.91 144.64 9.04 70.13 0.00 259968.88 22233.69 268746.15 00:27:17.765 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme8n1 ended in about 0.93 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme8n1 : 0.93 137.63 8.60 68.81 0.00 265286.80 19515.16 282727.16 00:27:17.765 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Job: Nvme9n1 ended in about 0.90 seconds with error 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme9n1 : 0.90 142.35 8.90 71.18 0.00 249278.64 6941.96 315349.52 00:27:17.765 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.765 Verification LBA range: start 0x0 length 0x400 00:27:17.765 Nvme10n1 : 0.90 212.79 13.30 0.00 0.00 244429.56 36894.34 234570.33 00:27:17.765 =================================================================================================================== 00:27:17.765 Total : 1563.81 97.74 629.03 0.00 263462.17 6941.96 315349.52 00:27:17.765 [2024-07-12 13:35:15.044533] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:17.765 [2024-07-12 13:35:15.044616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:17.765 [2024-07-12 13:35:15.044651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:17.765 [2024-07-12 13:35:15.044993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.045030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8140 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.045050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8140 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.045193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.045220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f4f10 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.045236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f4f10 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.045387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.045414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0a90 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.045430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0a90 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.045566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.045593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1437010 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.045608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1437010 is same with the state(5) to be set 
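A note on the bdevperf Latency(us) table above: per its "Device Information" header, the columns after each job name are runtime in seconds, IOPS, MiB/s, Fail/s, TO/s, and average/min/max latency in microseconds, where Fail/s and TO/s read as failed and timed-out IOs per second. With the 65536-byte IO size these jobs use, MiB/s is simply IOPS/16, so the rows are internally consistent -- e.g. Nvme1n1: 139.53 IOPS / 16 = 8.72 MiB/s, and the Total row: 1563.81 / 16 = 97.74 MiB/s -- which is why the jobs that "ended ... with error" still report nonzero throughput alongside their Fail/s rate.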
00:27:17.765 [2024-07-12 13:35:15.046970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:17.765 [2024-07-12 13:35:15.047010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:17.765 [2024-07-12 13:35:15.047029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:17.765 [2024-07-12 13:35:15.047216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.047244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeea610 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.047260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeea610 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.047411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.047437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bf320 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.047452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf320 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.047582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.047607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15be600 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.047622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be600 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.047647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8140 (9): Bad file descriptor 00:27:17.765 [2024-07-12 13:35:15.047669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f4f10 (9): Bad file descriptor 00:27:17.765 [2024-07-12 13:35:15.047686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0a90 (9): Bad file descriptor 00:27:17.765 [2024-07-12 13:35:15.047703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1437010 (9): Bad file descriptor 00:27:17.765 [2024-07-12 13:35:15.047759] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:17.765 [2024-07-12 13:35:15.047781] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:17.765 [2024-07-12 13:35:15.047799] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:17.765 [2024-07-12 13:35:15.047819] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:17.765 [2024-07-12 13:35:15.048036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.048064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bfe80 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.048080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bfe80 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.048204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.048229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143db40 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.048243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143db40 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.048368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.765 [2024-07-12 13:35:15.048394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1431740 with addr=10.0.0.2, port=4420 00:27:17.765 [2024-07-12 13:35:15.048408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431740 is same with the state(5) to be set 00:27:17.765 [2024-07-12 13:35:15.048426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea610 (9): Bad file descriptor 00:27:17.765 [2024-07-12 13:35:15.048449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bf320 (9): Bad file descriptor 00:27:17.765 [2024-07-12 13:35:15.048467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15be600 (9): Bad file descriptor 00:27:17.765 [2024-07-12 13:35:15.048483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:17.765 [2024-07-12 13:35:15.048495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:17.765 [2024-07-12 13:35:15.048510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:17.765 [2024-07-12 13:35:15.048530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.765 [2024-07-12 13:35:15.048543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.765 [2024-07-12 13:35:15.048556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.765 [2024-07-12 13:35:15.048572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:17.765 [2024-07-12 13:35:15.048586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:17.765 [2024-07-12 13:35:15.048598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:17.765 [2024-07-12 13:35:15.048613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:17.766 [2024-07-12 13:35:15.048626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:17.766 [2024-07-12 13:35:15.048638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:17.766 [2024-07-12 13:35:15.048732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.766 [2024-07-12 13:35:15.048754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.766 [2024-07-12 13:35:15.048766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.766 [2024-07-12 13:35:15.048777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.766 [2024-07-12 13:35:15.048793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bfe80 (9): Bad file descriptor 00:27:17.766 [2024-07-12 13:35:15.048811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143db40 (9): Bad file descriptor 00:27:17.766 [2024-07-12 13:35:15.048828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1431740 (9): Bad file descriptor 00:27:17.766 [2024-07-12 13:35:15.048843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:17.766 [2024-07-12 13:35:15.048854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:17.766 [2024-07-12 13:35:15.048866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:17.766 [2024-07-12 13:35:15.048883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:17.766 [2024-07-12 13:35:15.048896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:17.766 [2024-07-12 13:35:15.048908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:17.766 [2024-07-12 13:35:15.048923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:17.766 [2024-07-12 13:35:15.048936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:17.766 [2024-07-12 13:35:15.048948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:17.766 [2024-07-12 13:35:15.048984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.766 [2024-07-12 13:35:15.049006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.766 [2024-07-12 13:35:15.049018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
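For context on the repeated "connect() failed, errno = 111" and "controller reinitialization failed" entries in this stretch of the log: errno 111 on Linux is ECONNREFUSED. The target process appears to have already exited at this point (the later "kill -9 3660323" reports "No such process"), so every reconnect attempt to 10.0.0.2:4420 is refused and each controller ends up in failed state -- consistent with the forced shutdown that nvmf_shutdown_tc3 exercises rather than an infrastructure problem. A quick way to confirm the errno mapping (a sketch; assumes python3 is available on the build host):

  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
  # expected output on Linux: 111 Connection refused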
00:27:17.766 [2024-07-12 13:35:15.049030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:17.766 [2024-07-12 13:35:15.049042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:17.766 [2024-07-12 13:35:15.049055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:17.766 [2024-07-12 13:35:15.049073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:17.766 [2024-07-12 13:35:15.049087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:17.766 [2024-07-12 13:35:15.049099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:17.766 [2024-07-12 13:35:15.049115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:17.766 [2024-07-12 13:35:15.049128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:17.766 [2024-07-12 13:35:15.049140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:17.766 [2024-07-12 13:35:15.049181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.766 [2024-07-12 13:35:15.049198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.766 [2024-07-12 13:35:15.049210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:18.330 13:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:18.330 13:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3660323 00:27:19.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3660323) - No such process 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.265 13:35:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.265 rmmod nvme_tcp 00:27:19.265 rmmod nvme_fabrics 00:27:19.265 rmmod nvme_keyring 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.265 13:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.169 13:35:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:21.427 00:27:21.427 real 0m7.436s 00:27:21.427 user 0m18.033s 00:27:21.427 sys 0m1.448s 00:27:21.427 13:35:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.427 13:35:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.427 ************************************ 00:27:21.427 END TEST nvmf_shutdown_tc3 00:27:21.427 ************************************ 00:27:21.427 13:35:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:21.427 13:35:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:21.427 00:27:21.427 real 0m27.280s 00:27:21.427 user 1m15.618s 00:27:21.427 sys 0m6.392s 00:27:21.427 13:35:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.427 13:35:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:21.427 ************************************ 00:27:21.427 END TEST nvmf_shutdown 00:27:21.427 ************************************ 00:27:21.427 13:35:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:21.427 13:35:18 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:21.427 13:35:18 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:21.427 13:35:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.427 13:35:18 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:21.427 13:35:18 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:21.427 13:35:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.427 13:35:18 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:21.427 13:35:18 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:21.427 
13:35:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:21.427 13:35:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.427 13:35:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.427 ************************************ 00:27:21.427 START TEST nvmf_multicontroller 00:27:21.427 ************************************ 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:21.427 * Looking for test storage... 00:27:21.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:21.427 13:35:18 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.427 13:35:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.957 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:23.958 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:23.958 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:23.958 Found net devices under 0000:09:00.0: cvl_0_0 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:23.958 Found net devices under 0000:09:00.1: cvl_0_1 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.958 13:35:20 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.958 13:35:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:27:23.958 00:27:23.958 --- 10.0.0.2 ping statistics --- 00:27:23.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.958 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:23.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:27:23.958 00:27:23.958 --- 10.0.0.1 ping statistics --- 00:27:23.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.958 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3662738 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3662738 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3662738 ']' 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.958 [2024-07-12 13:35:21.089976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:23.958 [2024-07-12 13:35:21.090046] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.958 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.958 [2024-07-12 13:35:21.130749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
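The nvmf_tcp_init trace above establishes the topology the rest of this test runs on: the two NIC ports detected earlier (cvl_0_0 and cvl_0_1) are split across network namespaces, with the target side isolated in cvl_0_0_ns_spdk on 10.0.0.2 and the initiator side left in the default namespace on 10.0.0.1. Condensed from the traced nvmf/common.sh commands (a readability sketch reconstructed from the trace, not a substitute for the helpers themselves):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why the listeners created over RPC and the bdevperf attach attempts later in the log all point at 10.0.0.2, ports 4420 and 4421.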
00:27:23.958 [2024-07-12 13:35:21.157242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:23.958 [2024-07-12 13:35:21.237721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.958 [2024-07-12 13:35:21.237773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.958 [2024-07-12 13:35:21.237801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.958 [2024-07-12 13:35:21.237812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.958 [2024-07-12 13:35:21.237821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.958 [2024-07-12 13:35:21.237906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.958 [2024-07-12 13:35:21.238016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.958 [2024-07-12 13:35:21.238019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.958 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.959 [2024-07-12 13:35:21.381502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.959 Malloc0 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.959 13:35:21 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 [2024-07-12 13:35:21.439475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 [2024-07-12 13:35:21.447368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 Malloc1 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:24.216 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.217 13:35:21 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3662861 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3662861 /var/tmp/bdevperf.sock 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3662861 ']' 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:24.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.217 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.473 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:24.473 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:24.473 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:24.473 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.473 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.731 NVMe0n1 00:27:24.731 13:35:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.731 13:35:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.731 1 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.731 request: 00:27:24.731 { 00:27:24.731 "name": "NVMe0", 00:27:24.731 "trtype": "tcp", 00:27:24.731 "traddr": "10.0.0.2", 00:27:24.731 "adrfam": "ipv4", 00:27:24.731 "trsvcid": "4420", 00:27:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.731 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:24.731 "hostaddr": "10.0.0.2", 00:27:24.731 "hostsvcid": "60000", 00:27:24.731 "prchk_reftag": false, 00:27:24.731 "prchk_guard": false, 00:27:24.731 "hdgst": false, 00:27:24.731 "ddgst": false, 00:27:24.731 "method": "bdev_nvme_attach_controller", 00:27:24.731 "req_id": 1 00:27:24.731 } 00:27:24.731 Got JSON-RPC error response 00:27:24.731 response: 00:27:24.731 { 00:27:24.731 "code": -114, 00:27:24.731 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:24.731 } 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.731 request: 00:27:24.731 { 00:27:24.731 "name": "NVMe0", 00:27:24.731 "trtype": "tcp", 00:27:24.731 "traddr": "10.0.0.2", 00:27:24.731 "adrfam": "ipv4", 00:27:24.731 "trsvcid": "4420", 00:27:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:24.731 "hostaddr": "10.0.0.2", 00:27:24.731 "hostsvcid": "60000", 00:27:24.731 "prchk_reftag": false, 00:27:24.731 "prchk_guard": false, 00:27:24.731 "hdgst": false, 00:27:24.731 "ddgst": false, 00:27:24.731 "method": "bdev_nvme_attach_controller", 00:27:24.731 "req_id": 1 00:27:24.731 } 00:27:24.731 Got JSON-RPC error response 00:27:24.731 response: 00:27:24.731 { 00:27:24.731 "code": -114, 00:27:24.731 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:24.731 } 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.731 request: 00:27:24.731 { 00:27:24.731 "name": "NVMe0", 00:27:24.731 "trtype": "tcp", 00:27:24.731 "traddr": "10.0.0.2", 00:27:24.731 "adrfam": "ipv4", 00:27:24.731 "trsvcid": "4420", 00:27:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.731 "hostaddr": "10.0.0.2", 00:27:24.731 "hostsvcid": "60000", 00:27:24.731 
"prchk_reftag": false, 00:27:24.731 "prchk_guard": false, 00:27:24.731 "hdgst": false, 00:27:24.731 "ddgst": false, 00:27:24.731 "multipath": "disable", 00:27:24.731 "method": "bdev_nvme_attach_controller", 00:27:24.731 "req_id": 1 00:27:24.731 } 00:27:24.731 Got JSON-RPC error response 00:27:24.731 response: 00:27:24.731 { 00:27:24.731 "code": -114, 00:27:24.731 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:24.731 } 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.731 request: 00:27:24.731 { 00:27:24.731 "name": "NVMe0", 00:27:24.731 "trtype": "tcp", 00:27:24.731 "traddr": "10.0.0.2", 00:27:24.731 "adrfam": "ipv4", 00:27:24.731 "trsvcid": "4420", 00:27:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.731 "hostaddr": "10.0.0.2", 00:27:24.731 "hostsvcid": "60000", 00:27:24.731 "prchk_reftag": false, 00:27:24.731 "prchk_guard": false, 00:27:24.731 "hdgst": false, 00:27:24.731 "ddgst": false, 00:27:24.731 "multipath": "failover", 00:27:24.731 "method": "bdev_nvme_attach_controller", 00:27:24.731 "req_id": 1 00:27:24.731 } 00:27:24.731 Got JSON-RPC error response 00:27:24.731 response: 00:27:24.731 { 00:27:24.731 "code": -114, 00:27:24.731 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:24.731 } 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:24.731 13:35:22 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:24.731 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.732 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.732 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.989 00:27:24.989 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.989 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:24.989 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:24.989 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.989 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.989 13:35:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.989 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:24.989 13:35:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:26.359 0 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3662861 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3662861 ']' 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- 
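With the negative cases out of the way, the script walks the supported flow: attach NVMe0 to the second listener on port 4421, detach it, attach a second controller NVMe1 to the same subsystem, check that bdev_nvme_get_controllers reports two controllers, and then drive I/O through bdevperf.py perform_tests. Roughly, under the same assumptions:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # expect 2
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests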
common/autotest_common.sh@952 -- # kill -0 3662861 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3662861 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3662861' 00:27:26.360 killing process with pid 3662861 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3662861 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3662861 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:26.360 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:26.360 [2024-07-12 13:35:21.553132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:26.360 [2024-07-12 13:35:21.553228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662861 ] 00:27:26.360 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.360 [2024-07-12 13:35:21.586132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:26.360 [2024-07-12 13:35:21.614166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.360 [2024-07-12 13:35:21.701803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.360 [2024-07-12 13:35:22.360526] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name fefbb615-b10b-4f22-bfbb-0e9ebf0a3a6f already exists 00:27:26.360 [2024-07-12 13:35:22.360565] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:fefbb615-b10b-4f22-bfbb-0e9ebf0a3a6f alias for bdev NVMe1n1 00:27:26.360 [2024-07-12 13:35:22.360597] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:26.360 Running I/O for 1 seconds... 00:27:26.360 00:27:26.360 Latency(us) 00:27:26.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.360 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:26.360 NVMe0n1 : 1.00 18739.19 73.20 0.00 0.00 6820.38 1990.35 12379.02 00:27:26.360 =================================================================================================================== 00:27:26.360 Total : 18739.19 73.20 0.00 0.00 6820.38 1990.35 12379.02 00:27:26.360 Received shutdown signal, test time was about 1.000000 seconds 00:27:26.360 00:27:26.360 Latency(us) 00:27:26.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.360 =================================================================================================================== 00:27:26.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.360 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.360 rmmod nvme_tcp 00:27:26.360 rmmod nvme_fabrics 00:27:26.360 rmmod nvme_keyring 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3662738 ']' 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3662738 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3662738 ']' 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3662738 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
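The try.txt dump above records the expected side effect of attaching NVMe1 without multipath: the bdev layer refuses to register a second bdev with the same UUID, while the write workload on NVMe0n1 still completes (about 18.7k IOPS here). The killprocess traces that follow match the shape sketched below; this is a loose reconstruction from the trace, not the verbatim helper in autotest_common.sh:

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 1                                  # is the process still alive?
        [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1                             # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                                 # reap it; works because it is a child of this shell
    }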
00:27:26.360 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3662738 00:27:26.617 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:26.617 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:26.617 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3662738' 00:27:26.617 killing process with pid 3662738 00:27:26.617 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3662738 00:27:26.617 13:35:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3662738 00:27:26.876 13:35:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.876 13:35:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.876 13:35:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.876 13:35:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.876 13:35:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.876 13:35:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.876 13:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.876 13:35:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.781 13:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.781 00:27:28.781 real 0m7.390s 00:27:28.781 user 0m11.471s 00:27:28.781 sys 0m2.323s 00:27:28.781 13:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.781 13:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.781 ************************************ 00:27:28.781 END TEST nvmf_multicontroller 00:27:28.781 ************************************ 00:27:28.781 13:35:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:28.781 13:35:26 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:28.781 13:35:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:28.781 13:35:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.781 13:35:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.781 ************************************ 00:27:28.781 START TEST nvmf_aer 00:27:28.781 ************************************ 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:28.781 * Looking for test storage... 
00:27:28.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.781 13:35:26 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:28.782 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:29.039 13:35:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.937 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:30.938 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:27:30.938 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:30.938 Found net devices under 0000:09:00.0: cvl_0_0 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:30.938 Found net devices under 0000:09:00.1: cvl_0_1 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.938 
13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:30.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:27:30.938 00:27:30.938 --- 10.0.0.2 ping statistics --- 00:27:30.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.938 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:27:30.938 00:27:30.938 --- 10.0.0.1 ping statistics --- 00:27:30.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.938 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3665071 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3665071 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3665071 ']' 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.938 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.938 [2024-07-12 13:35:28.382932] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:30.938 [2024-07-12 13:35:28.383016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.196 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.196 [2024-07-12 13:35:28.420174] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:31.197 [2024-07-12 13:35:28.446266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.197 [2024-07-12 13:35:28.528058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
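The bring-up traced above moves one port of the NIC pair into a private network namespace so target and initiator can exchange real TCP traffic on a single host. Condensed from the commands in the trace (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator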
00:27:31.197 [2024-07-12 13:35:28.528124] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.197 [2024-07-12 13:35:28.528151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.197 [2024-07-12 13:35:28.528163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.197 [2024-07-12 13:35:28.528172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.197 [2024-07-12 13:35:28.528256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.197 [2024-07-12 13:35:28.528354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.197 [2024-07-12 13:35:28.528388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.197 [2024-07-12 13:35:28.528391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.197 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.197 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:31.197 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.197 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:31.197 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 [2024-07-12 13:35:28.680105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 Malloc0 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.455 13:35:28 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 [2024-07-12 13:35:28.733426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.455 [ 00:27:31.455 { 00:27:31.455 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:31.455 "subtype": "Discovery", 00:27:31.455 "listen_addresses": [], 00:27:31.455 "allow_any_host": true, 00:27:31.455 "hosts": [] 00:27:31.455 }, 00:27:31.455 { 00:27:31.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.455 "subtype": "NVMe", 00:27:31.455 "listen_addresses": [ 00:27:31.455 { 00:27:31.455 "trtype": "TCP", 00:27:31.455 "adrfam": "IPv4", 00:27:31.455 "traddr": "10.0.0.2", 00:27:31.455 "trsvcid": "4420" 00:27:31.455 } 00:27:31.455 ], 00:27:31.455 "allow_any_host": true, 00:27:31.455 "hosts": [], 00:27:31.455 "serial_number": "SPDK00000000000001", 00:27:31.455 "model_number": "SPDK bdev Controller", 00:27:31.455 "max_namespaces": 2, 00:27:31.455 "min_cntlid": 1, 00:27:31.455 "max_cntlid": 65519, 00:27:31.455 "namespaces": [ 00:27:31.455 { 00:27:31.455 "nsid": 1, 00:27:31.455 "bdev_name": "Malloc0", 00:27:31.455 "name": "Malloc0", 00:27:31.455 "nguid": "3E536EEFE05647439393E669DFB79187", 00:27:31.455 "uuid": "3e536eef-e056-4743-9393-e669dfb79187" 00:27:31.455 } 00:27:31.455 ] 00:27:31.455 } 00:27:31.455 ] 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3665099 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:31.455 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:31.455 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:31.713 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:31.713 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
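For the AER test the target, which runs inside the namespace, is configured over its default RPC socket: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and a subsystem capped at two namespaces (-m 2), after which the aer example is started against the listener and waits for a touch file. A sketch of the equivalent manual sequence, paths again assumed relative to the repo root:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # start the AER example against the listener; it signals via the touch file
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &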
-e /tmp/aer_touch_file ']' 00:27:31.713 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:31.713 13:35:28 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:31.713 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.713 13:35:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.713 Malloc1 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.713 Asynchronous Event Request test 00:27:31.713 Attaching to 10.0.0.2 00:27:31.713 Attached to 10.0.0.2 00:27:31.713 Registering asynchronous event callbacks... 00:27:31.713 Starting namespace attribute notice tests for all controllers... 00:27:31.713 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:31.713 aer_cb - Changed Namespace 00:27:31.713 Cleaning up... 00:27:31.713 [ 00:27:31.713 { 00:27:31.713 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:31.713 "subtype": "Discovery", 00:27:31.713 "listen_addresses": [], 00:27:31.713 "allow_any_host": true, 00:27:31.713 "hosts": [] 00:27:31.713 }, 00:27:31.713 { 00:27:31.713 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.713 "subtype": "NVMe", 00:27:31.713 "listen_addresses": [ 00:27:31.713 { 00:27:31.713 "trtype": "TCP", 00:27:31.713 "adrfam": "IPv4", 00:27:31.713 "traddr": "10.0.0.2", 00:27:31.713 "trsvcid": "4420" 00:27:31.713 } 00:27:31.713 ], 00:27:31.713 "allow_any_host": true, 00:27:31.713 "hosts": [], 00:27:31.713 "serial_number": "SPDK00000000000001", 00:27:31.713 "model_number": "SPDK bdev Controller", 00:27:31.713 "max_namespaces": 2, 00:27:31.713 "min_cntlid": 1, 00:27:31.713 "max_cntlid": 65519, 00:27:31.713 "namespaces": [ 00:27:31.713 { 00:27:31.713 "nsid": 1, 00:27:31.713 "bdev_name": "Malloc0", 00:27:31.713 "name": "Malloc0", 00:27:31.713 "nguid": "3E536EEFE05647439393E669DFB79187", 00:27:31.713 "uuid": "3e536eef-e056-4743-9393-e669dfb79187" 00:27:31.713 }, 00:27:31.713 { 00:27:31.713 "nsid": 2, 00:27:31.713 "bdev_name": "Malloc1", 00:27:31.713 "name": "Malloc1", 00:27:31.713 "nguid": "3C7972CCDDCD46C885E1006A186C28D8", 00:27:31.713 "uuid": "3c7972cc-ddcd-46c8-85e1-006a186c28d8" 00:27:31.713 } 00:27:31.713 ] 00:27:31.713 } 00:27:31.713 ] 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3665099 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
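The namespace-change AER itself is triggered by hot-adding a second namespace while the aer example is waiting: when the controller reports the change (log page 4, as logged above), the example touches /tmp/aer_touch_file and the script tears everything down. Roughly:

    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the namespace-attribute AEN
    # wait for the example to acknowledge the event, then clean up
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_malloc_delete Malloc1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1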
-- # rpc_cmd bdev_malloc_delete Malloc1 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:31.713 rmmod nvme_tcp 00:27:31.713 rmmod nvme_fabrics 00:27:31.713 rmmod nvme_keyring 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3665071 ']' 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3665071 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3665071 ']' 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3665071 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3665071 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:31.713 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:31.714 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3665071' 00:27:31.714 killing process with pid 3665071 00:27:31.714 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3665071 00:27:31.714 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3665071 00:27:31.973 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:31.973 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.973 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.973 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.973 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.973 13:35:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.973 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:27:31.973 13:35:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.514 13:35:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.514 00:27:34.514 real 0m5.249s 00:27:34.514 user 0m4.062s 00:27:34.514 sys 0m1.900s 00:27:34.514 13:35:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.514 13:35:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.514 ************************************ 00:27:34.514 END TEST nvmf_aer 00:27:34.514 ************************************ 00:27:34.514 13:35:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:34.514 13:35:31 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:34.514 13:35:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:34.514 13:35:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.514 13:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.514 ************************************ 00:27:34.514 START TEST nvmf_async_init 00:27:34.514 ************************************ 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:34.514 * Looking for test storage... 00:27:34.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:34.514 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4cb8bce768da4740bc59f2ea4679b0be 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:34.515 13:35:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.413 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.413 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.413 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:36.414 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:36.414 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:36.414 Found net devices under 0000:09:00.0: cvl_0_0 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:36.414 Found net devices under 0000:09:00.1: cvl_0_1 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.414 
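For readability, the interface plumbing that nvmf_tcp_init carries out in the surrounding trace condenses to roughly the sketch below. The cvl_0_0/cvl_0_1 device names and the 10.0.0.x addresses are whatever this particular run detected and assigned, so treat this as a reconstruction of this host's setup rather than a general recipe:

    # move the target-side port into its own namespace and address both ends
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator and sanity-check reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1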
13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:27:36.414 00:27:36.414 --- 10.0.0.2 ping statistics --- 00:27:36.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.414 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:27:36.414 00:27:36.414 --- 10.0.0.1 ping statistics --- 00:27:36.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.414 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3667148 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 
3667148 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3667148 ']' 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.414 13:35:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.414 [2024-07-12 13:35:33.775472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:36.414 [2024-07-12 13:35:33.775554] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.414 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.414 [2024-07-12 13:35:33.811916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:36.414 [2024-07-12 13:35:33.836999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.673 [2024-07-12 13:35:33.919420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.673 [2024-07-12 13:35:33.919474] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.673 [2024-07-12 13:35:33.919487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.673 [2024-07-12 13:35:33.919498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.673 [2024-07-12 13:35:33.919508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
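The target itself is launched inside that namespace by nvmfappstart, and the harness then blocks until the RPC socket answers (waitforlisten). Stripped of the autotest wrappers, the equivalent is roughly the sketch below; the rpc_get_methods polling loop is only a stand-in assumption for waitforlisten, which in the real scripts lives in autotest_common.sh:

    # run nvmf_tgt pinned to core 0 (-m 0x1) with all trace groups enabled (-e 0xFFFF)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # wait until the app is up and serving RPCs on the default socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done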
00:27:36.673 [2024-07-12 13:35:33.919533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.673 [2024-07-12 13:35:34.048101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.673 null0 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4cb8bce768da4740bc59f2ea4679b0be 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.673 [2024-07-12 13:35:34.088346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.673 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.931 nvme0n1 00:27:36.931 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.931 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:36.931 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.931 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.931 [ 00:27:36.931 { 00:27:36.931 "name": "nvme0n1", 00:27:36.931 "aliases": [ 00:27:36.931 "4cb8bce7-68da-4740-bc59-f2ea4679b0be" 00:27:36.931 ], 00:27:36.931 "product_name": "NVMe disk", 00:27:36.931 "block_size": 512, 00:27:36.931 "num_blocks": 2097152, 00:27:36.931 "uuid": "4cb8bce7-68da-4740-bc59-f2ea4679b0be", 00:27:36.931 "assigned_rate_limits": { 00:27:36.931 "rw_ios_per_sec": 0, 00:27:36.931 "rw_mbytes_per_sec": 0, 00:27:36.931 "r_mbytes_per_sec": 0, 00:27:36.931 "w_mbytes_per_sec": 0 00:27:36.931 }, 00:27:36.931 "claimed": false, 00:27:36.931 "zoned": false, 00:27:36.931 "supported_io_types": { 00:27:36.931 "read": true, 00:27:36.931 "write": true, 00:27:36.931 "unmap": false, 00:27:36.931 "flush": true, 00:27:36.931 "reset": true, 00:27:36.931 "nvme_admin": true, 00:27:36.931 "nvme_io": true, 00:27:36.931 "nvme_io_md": false, 00:27:36.931 "write_zeroes": true, 00:27:36.931 "zcopy": false, 00:27:36.931 "get_zone_info": false, 00:27:36.931 "zone_management": false, 00:27:36.931 "zone_append": false, 00:27:36.931 "compare": true, 00:27:36.931 "compare_and_write": true, 00:27:36.931 "abort": true, 00:27:36.931 "seek_hole": false, 00:27:36.931 "seek_data": false, 00:27:36.931 "copy": true, 00:27:36.931 "nvme_iov_md": false 00:27:36.931 }, 00:27:36.931 "memory_domains": [ 00:27:36.931 { 00:27:36.931 "dma_device_id": "system", 00:27:36.931 "dma_device_type": 1 00:27:36.931 } 00:27:36.931 ], 00:27:36.931 "driver_specific": { 00:27:36.931 "nvme": [ 00:27:36.931 { 00:27:36.931 "trid": { 00:27:36.931 "trtype": "TCP", 00:27:36.931 "adrfam": "IPv4", 00:27:36.931 "traddr": "10.0.0.2", 00:27:36.931 "trsvcid": "4420", 00:27:36.931 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:36.931 }, 00:27:36.931 "ctrlr_data": { 00:27:36.931 "cntlid": 1, 00:27:36.931 "vendor_id": "0x8086", 00:27:36.931 "model_number": "SPDK bdev Controller", 00:27:36.931 "serial_number": "00000000000000000000", 00:27:36.931 "firmware_revision": "24.09", 00:27:36.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.931 "oacs": { 00:27:36.931 "security": 0, 00:27:36.931 "format": 0, 00:27:36.931 "firmware": 0, 00:27:36.931 "ns_manage": 0 00:27:36.931 }, 00:27:36.931 "multi_ctrlr": true, 00:27:36.931 "ana_reporting": false 00:27:36.931 }, 00:27:36.931 "vs": { 00:27:36.931 "nvme_version": "1.3" 00:27:36.931 }, 00:27:36.931 "ns_data": { 00:27:36.931 "id": 1, 00:27:36.931 "can_share": true 00:27:36.931 } 00:27:36.931 } 00:27:36.931 ], 00:27:36.931 "mp_policy": "active_passive" 00:27:36.931 } 00:27:36.931 } 00:27:36.931 ] 00:27:36.931 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.931 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
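Everything reflected in the bdev_get_bdevs dump above was provisioned by the short RPC sequence traced between the transport creation and the attach. With rpc_cmd (the autotest wrapper) spelled out as scripts/rpc.py, that sequence is roughly:

    rpc=./scripts/rpc.py                       # talks to /var/tmp/spdk.sock by default
    nguid=$(uuidgen | tr -d -)                 # 4cb8bce768da4740bc59f2ea4679b0be in this run

    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512       # 1024 MiB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # host side: attach through the same target's bdev_nvme layer, then inspect the result
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_get_bdevs -b nvme0n1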
00:27:36.931 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.931 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.931 [2024-07-12 13:35:34.336862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:36.931 [2024-07-12 13:35:34.336957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc400b0 (9): Bad file descriptor 00:27:37.189 [2024-07-12 13:35:34.469438] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.189 [ 00:27:37.189 { 00:27:37.189 "name": "nvme0n1", 00:27:37.189 "aliases": [ 00:27:37.189 "4cb8bce7-68da-4740-bc59-f2ea4679b0be" 00:27:37.189 ], 00:27:37.189 "product_name": "NVMe disk", 00:27:37.189 "block_size": 512, 00:27:37.189 "num_blocks": 2097152, 00:27:37.189 "uuid": "4cb8bce7-68da-4740-bc59-f2ea4679b0be", 00:27:37.189 "assigned_rate_limits": { 00:27:37.189 "rw_ios_per_sec": 0, 00:27:37.189 "rw_mbytes_per_sec": 0, 00:27:37.189 "r_mbytes_per_sec": 0, 00:27:37.189 "w_mbytes_per_sec": 0 00:27:37.189 }, 00:27:37.189 "claimed": false, 00:27:37.189 "zoned": false, 00:27:37.189 "supported_io_types": { 00:27:37.189 "read": true, 00:27:37.189 "write": true, 00:27:37.189 "unmap": false, 00:27:37.189 "flush": true, 00:27:37.189 "reset": true, 00:27:37.189 "nvme_admin": true, 00:27:37.189 "nvme_io": true, 00:27:37.189 "nvme_io_md": false, 00:27:37.189 "write_zeroes": true, 00:27:37.189 "zcopy": false, 00:27:37.189 "get_zone_info": false, 00:27:37.189 "zone_management": false, 00:27:37.189 "zone_append": false, 00:27:37.189 "compare": true, 00:27:37.189 "compare_and_write": true, 00:27:37.189 "abort": true, 00:27:37.189 "seek_hole": false, 00:27:37.189 "seek_data": false, 00:27:37.189 "copy": true, 00:27:37.189 "nvme_iov_md": false 00:27:37.189 }, 00:27:37.189 "memory_domains": [ 00:27:37.189 { 00:27:37.189 "dma_device_id": "system", 00:27:37.189 "dma_device_type": 1 00:27:37.189 } 00:27:37.189 ], 00:27:37.189 "driver_specific": { 00:27:37.189 "nvme": [ 00:27:37.189 { 00:27:37.189 "trid": { 00:27:37.189 "trtype": "TCP", 00:27:37.189 "adrfam": "IPv4", 00:27:37.189 "traddr": "10.0.0.2", 00:27:37.189 "trsvcid": "4420", 00:27:37.189 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:37.189 }, 00:27:37.189 "ctrlr_data": { 00:27:37.189 "cntlid": 2, 00:27:37.189 "vendor_id": "0x8086", 00:27:37.189 "model_number": "SPDK bdev Controller", 00:27:37.189 "serial_number": "00000000000000000000", 00:27:37.189 "firmware_revision": "24.09", 00:27:37.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:37.189 "oacs": { 00:27:37.189 "security": 0, 00:27:37.189 "format": 0, 00:27:37.189 "firmware": 0, 00:27:37.189 "ns_manage": 0 00:27:37.189 }, 00:27:37.189 "multi_ctrlr": true, 00:27:37.189 "ana_reporting": false 00:27:37.189 }, 00:27:37.189 "vs": { 00:27:37.189 "nvme_version": "1.3" 00:27:37.189 }, 00:27:37.189 "ns_data": { 00:27:37.189 "id": 1, 00:27:37.189 "can_share": true 00:27:37.189 } 00:27:37.189 } 00:27:37.189 ], 00:27:37.189 "mp_policy": "active_passive" 00:27:37.189 } 00:27:37.189 } 
00:27:37.189 ] 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Rhw2gAOTnC 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Rhw2gAOTnC 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.189 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.190 [2024-07-12 13:35:34.517466] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:37.190 [2024-07-12 13:35:34.517600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Rhw2gAOTnC 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.190 [2024-07-12 13:35:34.525472] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Rhw2gAOTnC 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.190 [2024-07-12 13:35:34.533499] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:37.190 [2024-07-12 13:35:34.533557] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
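The secure-channel leg just traced adds a TLS listener on port 4421 and gates it on a pre-shared key; the WARNING lines show that passing the PSK as a file path was already flagged as deprecated in this SPDK revision. Condensed, with $rpc as in the previous sketch, and assuming the echo in the trace was redirected into the mktemp file (the redirect itself is not visible in the xtrace output), the flow is roughly:

    rpc=./scripts/rpc.py

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"

    # restrict the subsystem to named hosts, then expose a TLS listener on 4421
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"

    # reconnect over the secure listener with the same key and host NQN
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

    rm -f "$key_path"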
00:27:37.190 nvme0n1 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.190 [ 00:27:37.190 { 00:27:37.190 "name": "nvme0n1", 00:27:37.190 "aliases": [ 00:27:37.190 "4cb8bce7-68da-4740-bc59-f2ea4679b0be" 00:27:37.190 ], 00:27:37.190 "product_name": "NVMe disk", 00:27:37.190 "block_size": 512, 00:27:37.190 "num_blocks": 2097152, 00:27:37.190 "uuid": "4cb8bce7-68da-4740-bc59-f2ea4679b0be", 00:27:37.190 "assigned_rate_limits": { 00:27:37.190 "rw_ios_per_sec": 0, 00:27:37.190 "rw_mbytes_per_sec": 0, 00:27:37.190 "r_mbytes_per_sec": 0, 00:27:37.190 "w_mbytes_per_sec": 0 00:27:37.190 }, 00:27:37.190 "claimed": false, 00:27:37.190 "zoned": false, 00:27:37.190 "supported_io_types": { 00:27:37.190 "read": true, 00:27:37.190 "write": true, 00:27:37.190 "unmap": false, 00:27:37.190 "flush": true, 00:27:37.190 "reset": true, 00:27:37.190 "nvme_admin": true, 00:27:37.190 "nvme_io": true, 00:27:37.190 "nvme_io_md": false, 00:27:37.190 "write_zeroes": true, 00:27:37.190 "zcopy": false, 00:27:37.190 "get_zone_info": false, 00:27:37.190 "zone_management": false, 00:27:37.190 "zone_append": false, 00:27:37.190 "compare": true, 00:27:37.190 "compare_and_write": true, 00:27:37.190 "abort": true, 00:27:37.190 "seek_hole": false, 00:27:37.190 "seek_data": false, 00:27:37.190 "copy": true, 00:27:37.190 "nvme_iov_md": false 00:27:37.190 }, 00:27:37.190 "memory_domains": [ 00:27:37.190 { 00:27:37.190 "dma_device_id": "system", 00:27:37.190 "dma_device_type": 1 00:27:37.190 } 00:27:37.190 ], 00:27:37.190 "driver_specific": { 00:27:37.190 "nvme": [ 00:27:37.190 { 00:27:37.190 "trid": { 00:27:37.190 "trtype": "TCP", 00:27:37.190 "adrfam": "IPv4", 00:27:37.190 "traddr": "10.0.0.2", 00:27:37.190 "trsvcid": "4421", 00:27:37.190 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:37.190 }, 00:27:37.190 "ctrlr_data": { 00:27:37.190 "cntlid": 3, 00:27:37.190 "vendor_id": "0x8086", 00:27:37.190 "model_number": "SPDK bdev Controller", 00:27:37.190 "serial_number": "00000000000000000000", 00:27:37.190 "firmware_revision": "24.09", 00:27:37.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:37.190 "oacs": { 00:27:37.190 "security": 0, 00:27:37.190 "format": 0, 00:27:37.190 "firmware": 0, 00:27:37.190 "ns_manage": 0 00:27:37.190 }, 00:27:37.190 "multi_ctrlr": true, 00:27:37.190 "ana_reporting": false 00:27:37.190 }, 00:27:37.190 "vs": { 00:27:37.190 "nvme_version": "1.3" 00:27:37.190 }, 00:27:37.190 "ns_data": { 00:27:37.190 "id": 1, 00:27:37.190 "can_share": true 00:27:37.190 } 00:27:37.190 } 00:27:37.190 ], 00:27:37.190 "mp_policy": "active_passive" 00:27:37.190 } 00:27:37.190 } 00:27:37.190 ] 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.Rhw2gAOTnC 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:37.190 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:37.190 rmmod nvme_tcp 00:27:37.190 rmmod nvme_fabrics 00:27:37.448 rmmod nvme_keyring 00:27:37.448 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:37.448 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:37.448 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:37.448 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3667148 ']' 00:27:37.448 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3667148 00:27:37.448 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3667148 ']' 00:27:37.448 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3667148 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3667148 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3667148' 00:27:37.449 killing process with pid 3667148 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3667148 00:27:37.449 [2024-07-12 13:35:34.712766] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:37.449 [2024-07-12 13:35:34.712802] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3667148 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.449 13:35:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:39.982 13:35:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.982 00:27:39.982 real 0m5.454s 00:27:39.982 user 0m2.043s 00:27:39.982 sys 0m1.766s 00:27:39.982 13:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.982 13:35:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:39.982 ************************************ 00:27:39.982 END TEST nvmf_async_init 00:27:39.982 ************************************ 00:27:39.982 13:35:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:39.982 13:35:36 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:39.982 13:35:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:39.982 13:35:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.982 13:35:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.982 ************************************ 00:27:39.982 START TEST dma 00:27:39.982 ************************************ 00:27:39.982 13:35:37 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:39.982 * Looking for test storage... 00:27:39.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.982 13:35:37 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.982 13:35:37 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.982 13:35:37 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.982 13:35:37 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.982 13:35:37 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.982 13:35:37 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.982 13:35:37 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.982 13:35:37 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:39.982 13:35:37 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.982 13:35:37 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.982 13:35:37 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:39.982 13:35:37 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:39.982 00:27:39.982 real 0m0.075s 00:27:39.982 user 0m0.031s 00:27:39.982 sys 0m0.049s 00:27:39.982 13:35:37 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.982 13:35:37 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:27:39.982 ************************************ 00:27:39.982 END TEST dma 00:27:39.982 ************************************ 00:27:39.982 13:35:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:39.982 13:35:37 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:39.982 13:35:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:39.982 13:35:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.982 13:35:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.982 ************************************ 00:27:39.982 START TEST nvmf_identify 00:27:39.982 ************************************ 00:27:39.982 13:35:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:39.982 * Looking for test storage... 00:27:39.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.982 13:35:37 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.983 13:35:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:41.885 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:41.885 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:41.885 Found net devices under 0000:09:00.0: cvl_0_0 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
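The trace above is the harness working out which NICs can carry the NVMe/TCP test traffic: it builds PCI-ID lists for Intel E810/X722 and Mellanox parts, matches the two Intel 0x159b functions at 0000:09:00.0 and 0000:09:00.1, and then resolves each PCI function to its kernel net device through sysfs. A minimal illustrative sketch of that PCI-to-netdev lookup, using the addresses reported in this run (it mirrors the pci_net_devs glob visible in the nvmf/common.sh trace; it is not the harness code itself):

for pci in 0000:09:00.0 0000:09:00.1; do
    # each network-capable PCI function exposes its net device(s) under .../net/
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue        # skip functions with no bound net device
        echo "Found net devices under $pci: ${dev##*/}"
    done
done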
00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.885 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:41.886 Found net devices under 0000:09:00.1: cvl_0_1 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:41.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:27:41.886 00:27:41.886 --- 10.0.0.2 ping statistics --- 00:27:41.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.886 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:27:41.886 00:27:41.886 --- 10.0.0.1 ping statistics --- 00:27:41.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.886 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3669170 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3669170 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3669170 ']' 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:41.886 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.886 [2024-07-12 13:35:39.352521] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
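At this point nvmf_tcp_init has split the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, cvl_0_1 stays in the host namespace as the initiator with 10.0.0.1/24, TCP port 4420 is opened in iptables, and reachability is verified in both directions with a single ping each way. A condensed reproduction sketch of that plumbing, using the interface names and addresses from this run (run as root; this is an illustration, not the harness itself):

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # host -> namespace check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host check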
00:27:41.886 [2024-07-12 13:35:39.352602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.144 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.144 [2024-07-12 13:35:39.396128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:42.144 [2024-07-12 13:35:39.424984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.144 [2024-07-12 13:35:39.519487] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.144 [2024-07-12 13:35:39.519540] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.144 [2024-07-12 13:35:39.519569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.144 [2024-07-12 13:35:39.519581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.144 [2024-07-12 13:35:39.519590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.144 [2024-07-12 13:35:39.519647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.144 [2024-07-12 13:35:39.519737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.144 [2024-07-12 13:35:39.519740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.144 [2024-07-12 13:35:39.519718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 [2024-07-12 13:35:39.653165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 Malloc0 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 [2024-07-12 13:35:39.735362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.402 [ 00:27:42.402 { 00:27:42.402 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:42.402 "subtype": "Discovery", 00:27:42.402 "listen_addresses": [ 00:27:42.402 { 00:27:42.402 "trtype": "TCP", 00:27:42.402 "adrfam": "IPv4", 00:27:42.402 "traddr": "10.0.0.2", 00:27:42.402 "trsvcid": "4420" 00:27:42.402 } 00:27:42.402 ], 00:27:42.402 "allow_any_host": true, 00:27:42.402 "hosts": [] 00:27:42.402 }, 00:27:42.402 { 00:27:42.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.402 "subtype": "NVMe", 00:27:42.402 "listen_addresses": [ 00:27:42.402 { 00:27:42.402 "trtype": "TCP", 00:27:42.402 "adrfam": "IPv4", 00:27:42.402 "traddr": "10.0.0.2", 00:27:42.402 "trsvcid": "4420" 00:27:42.402 } 00:27:42.402 ], 00:27:42.402 "allow_any_host": true, 00:27:42.402 "hosts": [], 00:27:42.402 "serial_number": "SPDK00000000000001", 00:27:42.402 "model_number": "SPDK bdev Controller", 00:27:42.402 "max_namespaces": 32, 00:27:42.402 "min_cntlid": 1, 00:27:42.402 "max_cntlid": 65519, 00:27:42.402 "namespaces": [ 00:27:42.402 { 00:27:42.402 "nsid": 1, 00:27:42.402 "bdev_name": "Malloc0", 00:27:42.402 "name": "Malloc0", 00:27:42.402 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:42.402 "eui64": "ABCDEF0123456789", 00:27:42.402 "uuid": "121fe6dc-f2de-41fd-8134-8f735c82e926" 00:27:42.402 } 00:27:42.402 ] 00:27:42.402 } 00:27:42.402 ] 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.402 13:35:39 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:42.402 
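The identify run that follows is driven by host/identify.sh: with nvmf_tgt up inside the namespace, the script creates the TCP transport, a 64 MB malloc bdev, the nqn.2016-06.io.spdk:cnode1 subsystem with that bdev as namespace 1, and TCP listeners on 10.0.0.2:4420 for both the subsystem and the discovery service, then dumps the subsystem list and points spdk_nvme_identify at the discovery NQN. A rough equivalent written as direct scripts/rpc.py calls, with the flags taken verbatim from the trace and assuming the default /var/tmp/spdk.sock RPC socket (in the test these go through the rpc_cmd wrapper and run inside the cvl_0_0_ns_spdk namespace):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all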
[2024-07-12 13:35:39.777073] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:42.402 [2024-07-12 13:35:39.777118] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669298 ] 00:27:42.402 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.402 [2024-07-12 13:35:39.795108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:42.402 [2024-07-12 13:35:39.812959] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:42.402 [2024-07-12 13:35:39.813016] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:42.402 [2024-07-12 13:35:39.813030] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:42.402 [2024-07-12 13:35:39.813046] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:42.402 [2024-07-12 13:35:39.813056] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:42.402 [2024-07-12 13:35:39.816380] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:42.402 [2024-07-12 13:35:39.816432] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc73630 0 00:27:42.402 [2024-07-12 13:35:39.824328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:42.402 [2024-07-12 13:35:39.824349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:42.402 [2024-07-12 13:35:39.824357] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:42.402 [2024-07-12 13:35:39.824363] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:42.402 [2024-07-12 13:35:39.824413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.402 [2024-07-12 13:35:39.824426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.402 [2024-07-12 13:35:39.824434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.402 [2024-07-12 13:35:39.824451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:42.402 [2024-07-12 13:35:39.824477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.402 [2024-07-12 13:35:39.831328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.402 [2024-07-12 13:35:39.831347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.403 [2024-07-12 13:35:39.831355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.403 [2024-07-12 13:35:39.831378] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:42.403 [2024-07-12 13:35:39.831389] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:42.403 [2024-07-12 13:35:39.831398] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:42.403 [2024-07-12 13:35:39.831420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.403 [2024-07-12 13:35:39.831446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.403 [2024-07-12 13:35:39.831470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.403 [2024-07-12 13:35:39.831645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.403 [2024-07-12 13:35:39.831661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.403 [2024-07-12 13:35:39.831668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.403 [2024-07-12 13:35:39.831684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:42.403 [2024-07-12 13:35:39.831697] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:42.403 [2024-07-12 13:35:39.831709] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.403 [2024-07-12 13:35:39.831739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.403 [2024-07-12 13:35:39.831761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.403 [2024-07-12 13:35:39.831882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.403 [2024-07-12 13:35:39.831898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.403 [2024-07-12 13:35:39.831905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.403 [2024-07-12 13:35:39.831920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:42.403 [2024-07-12 13:35:39.831934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:42.403 [2024-07-12 13:35:39.831946] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.831960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.403 [2024-07-12 13:35:39.831971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.403 [2024-07-12 13:35:39.831991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.403 [2024-07-12 13:35:39.832115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.403 [2024-07-12 13:35:39.832127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.403 [2024-07-12 13:35:39.832134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.403 [2024-07-12 13:35:39.832149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:42.403 [2024-07-12 13:35:39.832165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.403 [2024-07-12 13:35:39.832191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.403 [2024-07-12 13:35:39.832211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.403 [2024-07-12 13:35:39.832340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.403 [2024-07-12 13:35:39.832356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.403 [2024-07-12 13:35:39.832363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.403 [2024-07-12 13:35:39.832378] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:42.403 [2024-07-12 13:35:39.832387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:42.403 [2024-07-12 13:35:39.832400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:42.403 [2024-07-12 13:35:39.832509] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:42.403 [2024-07-12 13:35:39.832517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:42.403 [2024-07-12 13:35:39.832530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.403 [2024-07-12 13:35:39.832560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.403 [2024-07-12 13:35:39.832581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.403 [2024-07-12 
13:35:39.832749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.403 [2024-07-12 13:35:39.832761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.403 [2024-07-12 13:35:39.832768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.403 [2024-07-12 13:35:39.832783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:42.403 [2024-07-12 13:35:39.832798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.403 [2024-07-12 13:35:39.832824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.403 [2024-07-12 13:35:39.832844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.403 [2024-07-12 13:35:39.832960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.403 [2024-07-12 13:35:39.832972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.403 [2024-07-12 13:35:39.832979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.832986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.403 [2024-07-12 13:35:39.832993] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:42.403 [2024-07-12 13:35:39.833002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:42.403 [2024-07-12 13:35:39.833015] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:42.403 [2024-07-12 13:35:39.833033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:42.403 [2024-07-12 13:35:39.833049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.833056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.403 [2024-07-12 13:35:39.833067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.403 [2024-07-12 13:35:39.833087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.403 [2024-07-12 13:35:39.833308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.403 [2024-07-12 13:35:39.833336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.403 [2024-07-12 13:35:39.833345] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.833351] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc73630): datao=0, datal=4096, 
cccid=0 00:27:42.403 [2024-07-12 13:35:39.833359] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc1f80) on tqpair(0xc73630): expected_datao=0, payload_size=4096 00:27:42.403 [2024-07-12 13:35:39.833367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.833378] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.403 [2024-07-12 13:35:39.833390] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.665 [2024-07-12 13:35:39.874353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.665 [2024-07-12 13:35:39.874362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.665 [2024-07-12 13:35:39.874382] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:42.665 [2024-07-12 13:35:39.874397] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:42.665 [2024-07-12 13:35:39.874406] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:42.665 [2024-07-12 13:35:39.874414] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:42.665 [2024-07-12 13:35:39.874422] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:42.665 [2024-07-12 13:35:39.874430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:42.665 [2024-07-12 13:35:39.874446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:42.665 [2024-07-12 13:35:39.874460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.665 [2024-07-12 13:35:39.874486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:42.665 [2024-07-12 13:35:39.874511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.665 [2024-07-12 13:35:39.874674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.665 [2024-07-12 13:35:39.874686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.665 [2024-07-12 13:35:39.874694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.665 [2024-07-12 13:35:39.874712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0xc73630) 00:27:42.665 [2024-07-12 13:35:39.874736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.665 [2024-07-12 13:35:39.874746] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc73630) 00:27:42.665 [2024-07-12 13:35:39.874768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.665 [2024-07-12 13:35:39.874778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc73630) 00:27:42.665 [2024-07-12 13:35:39.874799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.665 [2024-07-12 13:35:39.874809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.665 [2024-07-12 13:35:39.874827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.665 [2024-07-12 13:35:39.874836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.665 [2024-07-12 13:35:39.874845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:42.665 [2024-07-12 13:35:39.874864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:42.665 [2024-07-12 13:35:39.874877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.874884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc73630) 00:27:42.666 [2024-07-12 13:35:39.874895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.666 [2024-07-12 13:35:39.874918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc1f80, cid 0, qid 0 00:27:42.666 [2024-07-12 13:35:39.874930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2100, cid 1, qid 0 00:27:42.666 [2024-07-12 13:35:39.874937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2280, cid 2, qid 0 00:27:42.666 [2024-07-12 13:35:39.874945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.666 [2024-07-12 13:35:39.874953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2580, cid 4, qid 0 00:27:42.666 [2024-07-12 13:35:39.875135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.666 [2024-07-12 13:35:39.875148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.666 [2024-07-12 13:35:39.875155] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2580) on tqpair=0xc73630 00:27:42.666 [2024-07-12 13:35:39.875171] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:42.666 [2024-07-12 13:35:39.875180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:42.666 [2024-07-12 13:35:39.875196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc73630) 00:27:42.666 [2024-07-12 13:35:39.875217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.666 [2024-07-12 13:35:39.875238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2580, cid 4, qid 0 00:27:42.666 [2024-07-12 13:35:39.875388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.666 [2024-07-12 13:35:39.875405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.666 [2024-07-12 13:35:39.875412] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875418] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc73630): datao=0, datal=4096, cccid=4 00:27:42.666 [2024-07-12 13:35:39.875426] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc2580) on tqpair(0xc73630): expected_datao=0, payload_size=4096 00:27:42.666 [2024-07-12 13:35:39.875434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875444] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875452] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.666 [2024-07-12 13:35:39.875479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.666 [2024-07-12 13:35:39.875491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2580) on tqpair=0xc73630 00:27:42.666 [2024-07-12 13:35:39.875517] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:42.666 [2024-07-12 13:35:39.875554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc73630) 00:27:42.666 [2024-07-12 13:35:39.875575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.666 [2024-07-12 13:35:39.875586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc73630) 00:27:42.666 [2024-07-12 
13:35:39.875609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.666 [2024-07-12 13:35:39.875635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2580, cid 4, qid 0 00:27:42.666 [2024-07-12 13:35:39.875647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2700, cid 5, qid 0 00:27:42.666 [2024-07-12 13:35:39.875807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.666 [2024-07-12 13:35:39.875820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.666 [2024-07-12 13:35:39.875827] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875833] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc73630): datao=0, datal=1024, cccid=4 00:27:42.666 [2024-07-12 13:35:39.875840] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc2580) on tqpair(0xc73630): expected_datao=0, payload_size=1024 00:27:42.666 [2024-07-12 13:35:39.875848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875857] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875865] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.666 [2024-07-12 13:35:39.875883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.666 [2024-07-12 13:35:39.875889] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.875896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2700) on tqpair=0xc73630 00:27:42.666 [2024-07-12 13:35:39.920327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.666 [2024-07-12 13:35:39.920348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.666 [2024-07-12 13:35:39.920356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.920364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2580) on tqpair=0xc73630 00:27:42.666 [2024-07-12 13:35:39.920382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.920392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc73630) 00:27:42.666 [2024-07-12 13:35:39.920404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.666 [2024-07-12 13:35:39.920435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2580, cid 4, qid 0 00:27:42.666 [2024-07-12 13:35:39.920612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.666 [2024-07-12 13:35:39.920628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.666 [2024-07-12 13:35:39.920636] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.920642] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc73630): datao=0, datal=3072, cccid=4 00:27:42.666 [2024-07-12 13:35:39.920657] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc2580) on tqpair(0xc73630): expected_datao=0, payload_size=3072 00:27:42.666 [2024-07-12 
13:35:39.920665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.920686] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.920696] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.961458] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.666 [2024-07-12 13:35:39.961477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.666 [2024-07-12 13:35:39.961485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.961492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2580) on tqpair=0xc73630 00:27:42.666 [2024-07-12 13:35:39.961507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.961516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc73630) 00:27:42.666 [2024-07-12 13:35:39.961528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.666 [2024-07-12 13:35:39.961557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2580, cid 4, qid 0 00:27:42.666 [2024-07-12 13:35:39.961690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.666 [2024-07-12 13:35:39.961703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.666 [2024-07-12 13:35:39.961711] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.961717] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc73630): datao=0, datal=8, cccid=4 00:27:42.666 [2024-07-12 13:35:39.961725] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc2580) on tqpair(0xc73630): expected_datao=0, payload_size=8 00:27:42.666 [2024-07-12 13:35:39.961733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.961743] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:39.961750] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:40.006338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.666 [2024-07-12 13:35:40.006358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.666 [2024-07-12 13:35:40.006366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.666 [2024-07-12 13:35:40.006374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2580) on tqpair=0xc73630 00:27:42.666 ===================================================== 00:27:42.666 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:42.666 ===================================================== 00:27:42.666 Controller Capabilities/Features 00:27:42.666 ================================ 00:27:42.666 Vendor ID: 0000 00:27:42.666 Subsystem Vendor ID: 0000 00:27:42.666 Serial Number: .................... 00:27:42.666 Model Number: ........................................ 
00:27:42.666 Firmware Version: 24.09 00:27:42.666 Recommended Arb Burst: 0 00:27:42.666 IEEE OUI Identifier: 00 00 00 00:27:42.666 Multi-path I/O 00:27:42.666 May have multiple subsystem ports: No 00:27:42.666 May have multiple controllers: No 00:27:42.666 Associated with SR-IOV VF: No 00:27:42.666 Max Data Transfer Size: 131072 00:27:42.666 Max Number of Namespaces: 0 00:27:42.666 Max Number of I/O Queues: 1024 00:27:42.666 NVMe Specification Version (VS): 1.3 00:27:42.666 NVMe Specification Version (Identify): 1.3 00:27:42.666 Maximum Queue Entries: 128 00:27:42.666 Contiguous Queues Required: Yes 00:27:42.666 Arbitration Mechanisms Supported 00:27:42.666 Weighted Round Robin: Not Supported 00:27:42.666 Vendor Specific: Not Supported 00:27:42.666 Reset Timeout: 15000 ms 00:27:42.666 Doorbell Stride: 4 bytes 00:27:42.666 NVM Subsystem Reset: Not Supported 00:27:42.666 Command Sets Supported 00:27:42.666 NVM Command Set: Supported 00:27:42.666 Boot Partition: Not Supported 00:27:42.666 Memory Page Size Minimum: 4096 bytes 00:27:42.666 Memory Page Size Maximum: 4096 bytes 00:27:42.666 Persistent Memory Region: Not Supported 00:27:42.666 Optional Asynchronous Events Supported 00:27:42.666 Namespace Attribute Notices: Not Supported 00:27:42.666 Firmware Activation Notices: Not Supported 00:27:42.666 ANA Change Notices: Not Supported 00:27:42.666 PLE Aggregate Log Change Notices: Not Supported 00:27:42.666 LBA Status Info Alert Notices: Not Supported 00:27:42.666 EGE Aggregate Log Change Notices: Not Supported 00:27:42.666 Normal NVM Subsystem Shutdown event: Not Supported 00:27:42.666 Zone Descriptor Change Notices: Not Supported 00:27:42.666 Discovery Log Change Notices: Supported 00:27:42.667 Controller Attributes 00:27:42.667 128-bit Host Identifier: Not Supported 00:27:42.667 Non-Operational Permissive Mode: Not Supported 00:27:42.667 NVM Sets: Not Supported 00:27:42.667 Read Recovery Levels: Not Supported 00:27:42.667 Endurance Groups: Not Supported 00:27:42.667 Predictable Latency Mode: Not Supported 00:27:42.667 Traffic Based Keep ALive: Not Supported 00:27:42.667 Namespace Granularity: Not Supported 00:27:42.667 SQ Associations: Not Supported 00:27:42.667 UUID List: Not Supported 00:27:42.667 Multi-Domain Subsystem: Not Supported 00:27:42.667 Fixed Capacity Management: Not Supported 00:27:42.667 Variable Capacity Management: Not Supported 00:27:42.667 Delete Endurance Group: Not Supported 00:27:42.667 Delete NVM Set: Not Supported 00:27:42.667 Extended LBA Formats Supported: Not Supported 00:27:42.667 Flexible Data Placement Supported: Not Supported 00:27:42.667 00:27:42.667 Controller Memory Buffer Support 00:27:42.667 ================================ 00:27:42.667 Supported: No 00:27:42.667 00:27:42.667 Persistent Memory Region Support 00:27:42.667 ================================ 00:27:42.667 Supported: No 00:27:42.667 00:27:42.667 Admin Command Set Attributes 00:27:42.667 ============================ 00:27:42.667 Security Send/Receive: Not Supported 00:27:42.667 Format NVM: Not Supported 00:27:42.667 Firmware Activate/Download: Not Supported 00:27:42.667 Namespace Management: Not Supported 00:27:42.667 Device Self-Test: Not Supported 00:27:42.667 Directives: Not Supported 00:27:42.667 NVMe-MI: Not Supported 00:27:42.667 Virtualization Management: Not Supported 00:27:42.667 Doorbell Buffer Config: Not Supported 00:27:42.667 Get LBA Status Capability: Not Supported 00:27:42.667 Command & Feature Lockdown Capability: Not Supported 00:27:42.667 Abort Command Limit: 1 00:27:42.667 Async 
Event Request Limit: 4 00:27:42.667 Number of Firmware Slots: N/A 00:27:42.667 Firmware Slot 1 Read-Only: N/A 00:27:42.667 Firmware Activation Without Reset: N/A 00:27:42.667 Multiple Update Detection Support: N/A 00:27:42.667 Firmware Update Granularity: No Information Provided 00:27:42.667 Per-Namespace SMART Log: No 00:27:42.667 Asymmetric Namespace Access Log Page: Not Supported 00:27:42.667 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:42.667 Command Effects Log Page: Not Supported 00:27:42.667 Get Log Page Extended Data: Supported 00:27:42.667 Telemetry Log Pages: Not Supported 00:27:42.667 Persistent Event Log Pages: Not Supported 00:27:42.667 Supported Log Pages Log Page: May Support 00:27:42.667 Commands Supported & Effects Log Page: Not Supported 00:27:42.667 Feature Identifiers & Effects Log Page:May Support 00:27:42.667 NVMe-MI Commands & Effects Log Page: May Support 00:27:42.667 Data Area 4 for Telemetry Log: Not Supported 00:27:42.667 Error Log Page Entries Supported: 128 00:27:42.667 Keep Alive: Not Supported 00:27:42.667 00:27:42.667 NVM Command Set Attributes 00:27:42.667 ========================== 00:27:42.667 Submission Queue Entry Size 00:27:42.667 Max: 1 00:27:42.667 Min: 1 00:27:42.667 Completion Queue Entry Size 00:27:42.667 Max: 1 00:27:42.667 Min: 1 00:27:42.667 Number of Namespaces: 0 00:27:42.667 Compare Command: Not Supported 00:27:42.667 Write Uncorrectable Command: Not Supported 00:27:42.667 Dataset Management Command: Not Supported 00:27:42.667 Write Zeroes Command: Not Supported 00:27:42.667 Set Features Save Field: Not Supported 00:27:42.667 Reservations: Not Supported 00:27:42.667 Timestamp: Not Supported 00:27:42.667 Copy: Not Supported 00:27:42.667 Volatile Write Cache: Not Present 00:27:42.667 Atomic Write Unit (Normal): 1 00:27:42.667 Atomic Write Unit (PFail): 1 00:27:42.667 Atomic Compare & Write Unit: 1 00:27:42.667 Fused Compare & Write: Supported 00:27:42.667 Scatter-Gather List 00:27:42.667 SGL Command Set: Supported 00:27:42.667 SGL Keyed: Supported 00:27:42.667 SGL Bit Bucket Descriptor: Not Supported 00:27:42.667 SGL Metadata Pointer: Not Supported 00:27:42.667 Oversized SGL: Not Supported 00:27:42.667 SGL Metadata Address: Not Supported 00:27:42.667 SGL Offset: Supported 00:27:42.667 Transport SGL Data Block: Not Supported 00:27:42.667 Replay Protected Memory Block: Not Supported 00:27:42.667 00:27:42.667 Firmware Slot Information 00:27:42.667 ========================= 00:27:42.667 Active slot: 0 00:27:42.667 00:27:42.667 00:27:42.667 Error Log 00:27:42.667 ========= 00:27:42.667 00:27:42.667 Active Namespaces 00:27:42.667 ================= 00:27:42.667 Discovery Log Page 00:27:42.667 ================== 00:27:42.667 Generation Counter: 2 00:27:42.667 Number of Records: 2 00:27:42.667 Record Format: 0 00:27:42.667 00:27:42.667 Discovery Log Entry 0 00:27:42.667 ---------------------- 00:27:42.667 Transport Type: 3 (TCP) 00:27:42.667 Address Family: 1 (IPv4) 00:27:42.667 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:42.667 Entry Flags: 00:27:42.667 Duplicate Returned Information: 1 00:27:42.667 Explicit Persistent Connection Support for Discovery: 1 00:27:42.667 Transport Requirements: 00:27:42.667 Secure Channel: Not Required 00:27:42.667 Port ID: 0 (0x0000) 00:27:42.667 Controller ID: 65535 (0xffff) 00:27:42.667 Admin Max SQ Size: 128 00:27:42.667 Transport Service Identifier: 4420 00:27:42.667 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:42.667 Transport Address: 10.0.0.2 00:27:42.667 
Discovery Log Entry 1 00:27:42.667 ---------------------- 00:27:42.667 Transport Type: 3 (TCP) 00:27:42.667 Address Family: 1 (IPv4) 00:27:42.667 Subsystem Type: 2 (NVM Subsystem) 00:27:42.667 Entry Flags: 00:27:42.667 Duplicate Returned Information: 0 00:27:42.667 Explicit Persistent Connection Support for Discovery: 0 00:27:42.667 Transport Requirements: 00:27:42.667 Secure Channel: Not Required 00:27:42.667 Port ID: 0 (0x0000) 00:27:42.667 Controller ID: 65535 (0xffff) 00:27:42.667 Admin Max SQ Size: 128 00:27:42.667 Transport Service Identifier: 4420 00:27:42.667 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:42.667 Transport Address: 10.0.0.2 [2024-07-12 13:35:40.006488] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:42.667 [2024-07-12 13:35:40.006509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc1f80) on tqpair=0xc73630 00:27:42.667 [2024-07-12 13:35:40.006521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.667 [2024-07-12 13:35:40.006531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2100) on tqpair=0xc73630 00:27:42.667 [2024-07-12 13:35:40.006539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.667 [2024-07-12 13:35:40.006547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2280) on tqpair=0xc73630 00:27:42.667 [2024-07-12 13:35:40.006555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.667 [2024-07-12 13:35:40.006564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.667 [2024-07-12 13:35:40.006571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.667 [2024-07-12 13:35:40.006589] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.667 [2024-07-12 13:35:40.006599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.667 [2024-07-12 13:35:40.006608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.667 [2024-07-12 13:35:40.006621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.667 [2024-07-12 13:35:40.006645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.667 [2024-07-12 13:35:40.006796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.667 [2024-07-12 13:35:40.006809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.667 [2024-07-12 13:35:40.006816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.667 [2024-07-12 13:35:40.006824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.667 [2024-07-12 13:35:40.006835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.667 [2024-07-12 13:35:40.006844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.667 [2024-07-12 13:35:40.006850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.667 [2024-07-12 13:35:40.006861] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.667 [2024-07-12 13:35:40.006887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.667 [2024-07-12 13:35:40.007033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.667 [2024-07-12 13:35:40.007049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.667 [2024-07-12 13:35:40.007056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.667 [2024-07-12 13:35:40.007063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.667 [2024-07-12 13:35:40.007071] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:42.667 [2024-07-12 13:35:40.007080] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:42.667 [2024-07-12 13:35:40.007096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.667 [2024-07-12 13:35:40.007105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.667 [2024-07-12 13:35:40.007112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.667 [2024-07-12 13:35:40.007123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.667 [2024-07-12 13:35:40.007144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.667 [2024-07-12 13:35:40.007256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.007269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.007276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.007300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.007337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.007358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.007476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.007488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.007496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.007523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007541] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.007551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.007572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.007688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.007700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.007708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.007731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.007758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.007778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.007889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.007902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.007909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.007931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.007947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.007958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.007978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.008090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.008103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.008110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.008132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.008159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.008179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.008298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.008313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.008334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008341] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.008358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.008390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.008411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.008535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.008551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.008558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.008582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.008609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.008630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.008749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.008764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.008772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.008796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.008824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.008845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.008960] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.008973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.008981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.008988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.009004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.009031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.009051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.009166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.009178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.009185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009192] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.009208] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.009238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.009259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.009384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.009400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.009408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.009432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.009460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.009481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.009600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.009615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.009622] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.009646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.668 [2024-07-12 13:35:40.009673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.668 [2024-07-12 13:35:40.009694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.668 [2024-07-12 13:35:40.009803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.668 [2024-07-12 13:35:40.009815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.668 [2024-07-12 13:35:40.009823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.668 [2024-07-12 13:35:40.009846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.668 [2024-07-12 13:35:40.009862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.669 [2024-07-12 13:35:40.009873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.669 [2024-07-12 13:35:40.009894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.669 [2024-07-12 13:35:40.010008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.669 [2024-07-12 13:35:40.010023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.669 [2024-07-12 13:35:40.010031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.010038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.669 [2024-07-12 13:35:40.010055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.010064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.010071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.669 [2024-07-12 13:35:40.010081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.669 [2024-07-12 13:35:40.010108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.669 [2024-07-12 13:35:40.010233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.669 [2024-07-12 13:35:40.010249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.669 [2024-07-12 13:35:40.010257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.010264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.669 
[2024-07-12 13:35:40.010280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.010290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.010297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc73630) 00:27:42.669 [2024-07-12 13:35:40.010308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.669 [2024-07-12 13:35:40.014341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc2400, cid 3, qid 0 00:27:42.669 [2024-07-12 13:35:40.014497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.669 [2024-07-12 13:35:40.014510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.669 [2024-07-12 13:35:40.014518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.014525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc2400) on tqpair=0xc73630 00:27:42.669 [2024-07-12 13:35:40.014538] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:27:42.669 00:27:42.669 13:35:40 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:42.669 [2024-07-12 13:35:40.050869] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:42.669 [2024-07-12 13:35:40.050920] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669306 ] 00:27:42.669 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.669 [2024-07-12 13:35:40.068038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
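The spdk_nvme_identify invocation logged above (with -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') drives exactly the initialization sequence traced by the DEBUG lines that follow: connect the admin queue, read VS and CAP, enable the controller and wait for CSTS.RDY = 1, then IDENTIFY, configure AER, keep-alive and queue counts. A rough equivalent using SPDK's public host API is sketched below; it is a simplified illustration of that flow under the same transport parameters, not the tool's actual source.

```c
/* Sketch of what an identify-style tool does for the cnode1 target above,
 * using SPDK's public host API (error handling trimmed for brevity).
 * The transport string matches the -r argument shown in the log. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
		"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

	/* spdk_nvme_connect() runs the init sequence traced in the DEBUG
	 * lines below: connect adminq, read VS/CAP, enable the controller,
	 * IDENTIFY, configure AER, keep-alive and queue negotiation. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.subnqn);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial: %.20s  Model: %.40s  FW: %.8s\n",
	       cdata->sn, cdata->mn, cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}
```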
00:27:42.669 [2024-07-12 13:35:40.085904] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:42.669 [2024-07-12 13:35:40.085969] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:42.669 [2024-07-12 13:35:40.085978] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:42.669 [2024-07-12 13:35:40.085995] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:42.669 [2024-07-12 13:35:40.086007] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:42.669 [2024-07-12 13:35:40.089377] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:42.669 [2024-07-12 13:35:40.089440] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd3f630 0 00:27:42.669 [2024-07-12 13:35:40.096336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:42.669 [2024-07-12 13:35:40.096358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:42.669 [2024-07-12 13:35:40.096366] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:42.669 [2024-07-12 13:35:40.096378] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:42.669 [2024-07-12 13:35:40.096431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.096444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.096452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.669 [2024-07-12 13:35:40.096469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:42.669 [2024-07-12 13:35:40.096495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.669 [2024-07-12 13:35:40.103329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.669 [2024-07-12 13:35:40.103348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.669 [2024-07-12 13:35:40.103356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.103363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.669 [2024-07-12 13:35:40.103386] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:42.669 [2024-07-12 13:35:40.103398] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:42.669 [2024-07-12 13:35:40.103408] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:42.669 [2024-07-12 13:35:40.103431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.103441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.103448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.669 [2024-07-12 13:35:40.103459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.669 [2024-07-12 13:35:40.103482] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.669 [2024-07-12 13:35:40.103707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.669 [2024-07-12 13:35:40.103723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.669 [2024-07-12 13:35:40.103730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.103737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.669 [2024-07-12 13:35:40.103745] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:42.669 [2024-07-12 13:35:40.103761] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:42.669 [2024-07-12 13:35:40.103773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.103781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.103787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.669 [2024-07-12 13:35:40.103798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.669 [2024-07-12 13:35:40.103834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.669 [2024-07-12 13:35:40.104056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.669 [2024-07-12 13:35:40.104072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.669 [2024-07-12 13:35:40.104079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.104086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.669 [2024-07-12 13:35:40.104094] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:42.669 [2024-07-12 13:35:40.104111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:42.669 [2024-07-12 13:35:40.104128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.104136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.104143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.669 [2024-07-12 13:35:40.104153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.669 [2024-07-12 13:35:40.104190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.669 [2024-07-12 13:35:40.104398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.669 [2024-07-12 13:35:40.104416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.669 [2024-07-12 13:35:40.104424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.104431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.669 [2024-07-12 13:35:40.104440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:42.669 [2024-07-12 13:35:40.104457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.104469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.669 [2024-07-12 13:35:40.104475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.669 [2024-07-12 13:35:40.104486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.670 [2024-07-12 13:35:40.104507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.670 [2024-07-12 13:35:40.104682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.670 [2024-07-12 13:35:40.104697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.670 [2024-07-12 13:35:40.104704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.670 [2024-07-12 13:35:40.104711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.670 [2024-07-12 13:35:40.104719] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:42.670 [2024-07-12 13:35:40.104727] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:42.670 [2024-07-12 13:35:40.104742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:42.670 [2024-07-12 13:35:40.104853] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:42.670 [2024-07-12 13:35:40.104860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:42.670 [2024-07-12 13:35:40.104873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.670 [2024-07-12 13:35:40.104881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.670 [2024-07-12 13:35:40.104887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.670 [2024-07-12 13:35:40.104897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.670 [2024-07-12 13:35:40.104917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.670 [2024-07-12 13:35:40.105105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.670 [2024-07-12 13:35:40.105121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.670 [2024-07-12 13:35:40.105128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.670 [2024-07-12 13:35:40.105134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.670 [2024-07-12 13:35:40.105143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:42.670 [2024-07-12 13:35:40.105166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.670 [2024-07-12 13:35:40.105176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:42.670 [2024-07-12 13:35:40.105182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.670 [2024-07-12 13:35:40.105193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.670 [2024-07-12 13:35:40.105214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.670 [2024-07-12 13:35:40.105386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.670 [2024-07-12 13:35:40.105402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.670 [2024-07-12 13:35:40.105409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.670 [2024-07-12 13:35:40.105416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.670 [2024-07-12 13:35:40.105424] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:42.671 [2024-07-12 13:35:40.105435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.105449] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:42.671 [2024-07-12 13:35:40.105463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.105478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.105489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.105500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.671 [2024-07-12 13:35:40.105521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.671 [2024-07-12 13:35:40.105733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.671 [2024-07-12 13:35:40.105806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.671 [2024-07-12 13:35:40.105815] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.105822] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f630): datao=0, datal=4096, cccid=0 00:27:42.671 [2024-07-12 13:35:40.105830] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8df80) on tqpair(0xd3f630): expected_datao=0, payload_size=4096 00:27:42.671 [2024-07-12 13:35:40.105893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.105912] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.105921] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.105933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.671 [2024-07-12 13:35:40.105943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.671 [2024-07-12 13:35:40.105950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.105956] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.671 [2024-07-12 13:35:40.105967] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:42.671 [2024-07-12 13:35:40.105981] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:42.671 [2024-07-12 13:35:40.105989] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:42.671 [2024-07-12 13:35:40.105996] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:42.671 [2024-07-12 13:35:40.106005] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:42.671 [2024-07-12 13:35:40.106016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.106033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.106046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.106084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:42.671 [2024-07-12 13:35:40.106106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.671 [2024-07-12 13:35:40.106298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.671 [2024-07-12 13:35:40.106322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.671 [2024-07-12 13:35:40.106334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106341] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.671 [2024-07-12 13:35:40.106353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.106376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.671 [2024-07-12 13:35:40.106386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.106408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.671 [2024-07-12 13:35:40.106417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106424] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.106439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.671 [2024-07-12 13:35:40.106448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.106469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.671 [2024-07-12 13:35:40.106478] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.106500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.106513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.106530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.671 [2024-07-12 13:35:40.106552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8df80, cid 0, qid 0 00:27:42.671 [2024-07-12 13:35:40.106567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e100, cid 1, qid 0 00:27:42.671 [2024-07-12 13:35:40.106576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e280, cid 2, qid 0 00:27:42.671 [2024-07-12 13:35:40.106584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.671 [2024-07-12 13:35:40.106591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e580, cid 4, qid 0 00:27:42.671 [2024-07-12 13:35:40.106813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.671 [2024-07-12 13:35:40.106828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.671 [2024-07-12 13:35:40.106835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e580) on tqpair=0xd3f630 00:27:42.671 [2024-07-12 13:35:40.106850] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:42.671 [2024-07-12 13:35:40.106874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.106890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.106900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 
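At this point in the trace the setup of nqn.2016-06.io.spdk:cnode1 is nearly complete: the keep-alive timer is armed ("Sending keep alive every 5000000 us") and the number of queues is being negotiated, and the lines that follow show namespace 1 being added and identified before the state machine reaches "ready". Once a controller handle is ready, the active namespaces it reported can be walked with the public API; a small hedged sketch follows, assuming a ctrlr obtained from spdk_nvme_connect() as in the previous sketch.

```c
/* Sketch: walk the active namespaces reported during the IDENTIFY steps
 * traced below ("Namespace 1 was added"). Assumes ctrlr was obtained via
 * spdk_nvme_connect() as in the earlier sketch. */
#include <stdio.h>
#include <inttypes.h>
#include "spdk/nvme.h"

static void
print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		printf("nsid %u: %" PRIu64 " blocks of %u bytes\n",
		       nsid,
		       spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}
```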
00:27:42.671 [2024-07-12 13:35:40.106911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106918] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.106924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.106934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:42.671 [2024-07-12 13:35:40.106955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e580, cid 4, qid 0 00:27:42.671 [2024-07-12 13:35:40.107141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.671 [2024-07-12 13:35:40.107156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.671 [2024-07-12 13:35:40.107164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.107171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e580) on tqpair=0xd3f630 00:27:42.671 [2024-07-12 13:35:40.107235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.107256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:42.671 [2024-07-12 13:35:40.107284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.107292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f630) 00:27:42.671 [2024-07-12 13:35:40.107302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.671 [2024-07-12 13:35:40.111332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e580, cid 4, qid 0 00:27:42.671 [2024-07-12 13:35:40.111555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.671 [2024-07-12 13:35:40.111572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.671 [2024-07-12 13:35:40.111579] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.111586] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f630): datao=0, datal=4096, cccid=4 00:27:42.671 [2024-07-12 13:35:40.111593] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8e580) on tqpair(0xd3f630): expected_datao=0, payload_size=4096 00:27:42.671 [2024-07-12 13:35:40.111605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.111623] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.671 [2024-07-12 13:35:40.111632] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.932 [2024-07-12 13:35:40.155355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.932 [2024-07-12 13:35:40.155363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e580) on tqpair=0xd3f630 00:27:42.932 [2024-07-12 13:35:40.155385] 
nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:42.932 [2024-07-12 13:35:40.155406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.155427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.155442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f630) 00:27:42.932 [2024-07-12 13:35:40.155461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.932 [2024-07-12 13:35:40.155484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e580, cid 4, qid 0 00:27:42.932 [2024-07-12 13:35:40.155682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.932 [2024-07-12 13:35:40.155698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.932 [2024-07-12 13:35:40.155705] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155711] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f630): datao=0, datal=4096, cccid=4 00:27:42.932 [2024-07-12 13:35:40.155719] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8e580) on tqpair(0xd3f630): expected_datao=0, payload_size=4096 00:27:42.932 [2024-07-12 13:35:40.155727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155737] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155801] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.932 [2024-07-12 13:35:40.155824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.932 [2024-07-12 13:35:40.155831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e580) on tqpair=0xd3f630 00:27:42.932 [2024-07-12 13:35:40.155858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.155879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.155893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.155901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f630) 00:27:42.932 [2024-07-12 13:35:40.155912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.932 [2024-07-12 13:35:40.155933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e580, cid 4, qid 0 00:27:42.932 [2024-07-12 13:35:40.156073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.932 [2024-07-12 
13:35:40.156089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.932 [2024-07-12 13:35:40.156095] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.156161] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f630): datao=0, datal=4096, cccid=4 00:27:42.932 [2024-07-12 13:35:40.156173] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8e580) on tqpair(0xd3f630): expected_datao=0, payload_size=4096 00:27:42.932 [2024-07-12 13:35:40.156180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.156199] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.156207] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.196544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.932 [2024-07-12 13:35:40.196563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.932 [2024-07-12 13:35:40.196571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.196578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e580) on tqpair=0xd3f630 00:27:42.932 [2024-07-12 13:35:40.196591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.196609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.196625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.196636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.196645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.196653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.196661] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:42.932 [2024-07-12 13:35:40.196669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:42.932 [2024-07-12 13:35:40.196677] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:42.932 [2024-07-12 13:35:40.196698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.196707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f630) 00:27:42.932 [2024-07-12 13:35:40.196718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.932 [2024-07-12 13:35:40.196730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.196737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.932 [2024-07-12 
13:35:40.196743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f630) 00:27:42.932 [2024-07-12 13:35:40.196752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.932 [2024-07-12 13:35:40.196779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e580, cid 4, qid 0 00:27:42.932 [2024-07-12 13:35:40.196791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e700, cid 5, qid 0 00:27:42.932 [2024-07-12 13:35:40.196970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.932 [2024-07-12 13:35:40.196985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.932 [2024-07-12 13:35:40.196992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.196999] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e580) on tqpair=0xd3f630 00:27:42.932 [2024-07-12 13:35:40.197011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.932 [2024-07-12 13:35:40.197021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.932 [2024-07-12 13:35:40.197032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.197040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e700) on tqpair=0xd3f630 00:27:42.932 [2024-07-12 13:35:40.197072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.197082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f630) 00:27:42.932 [2024-07-12 13:35:40.197092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.932 [2024-07-12 13:35:40.197113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e700, cid 5, qid 0 00:27:42.932 [2024-07-12 13:35:40.197272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.932 [2024-07-12 13:35:40.197288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.932 [2024-07-12 13:35:40.197298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.197305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e700) on tqpair=0xd3f630 00:27:42.932 [2024-07-12 13:35:40.197329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.197340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f630) 00:27:42.932 [2024-07-12 13:35:40.197353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.932 [2024-07-12 13:35:40.197375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e700, cid 5, qid 0 00:27:42.932 [2024-07-12 13:35:40.197503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.932 [2024-07-12 13:35:40.197518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.932 [2024-07-12 13:35:40.197525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.197532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e700) on tqpair=0xd3f630 00:27:42.932 [2024-07-12 
13:35:40.197550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.197559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f630) 00:27:42.932 [2024-07-12 13:35:40.197569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.932 [2024-07-12 13:35:40.197590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e700, cid 5, qid 0 00:27:42.932 [2024-07-12 13:35:40.197712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.932 [2024-07-12 13:35:40.197727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.932 [2024-07-12 13:35:40.197734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.197741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e700) on tqpair=0xd3f630 00:27:42.932 [2024-07-12 13:35:40.197768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.932 [2024-07-12 13:35:40.197779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f630) 00:27:42.932 [2024-07-12 13:35:40.197790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.933 [2024-07-12 13:35:40.197802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.197809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f630) 00:27:42.933 [2024-07-12 13:35:40.197819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.933 [2024-07-12 13:35:40.197829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.197837] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd3f630) 00:27:42.933 [2024-07-12 13:35:40.197850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.933 [2024-07-12 13:35:40.197878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.197885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd3f630) 00:27:42.933 [2024-07-12 13:35:40.197895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.933 [2024-07-12 13:35:40.197916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e700, cid 5, qid 0 00:27:42.933 [2024-07-12 13:35:40.197927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e580, cid 4, qid 0 00:27:42.933 [2024-07-12 13:35:40.197949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e880, cid 6, qid 0 00:27:42.933 [2024-07-12 13:35:40.197957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8ea00, cid 7, qid 0 00:27:42.933 [2024-07-12 13:35:40.198203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.933 [2024-07-12 13:35:40.198218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:27:42.933 [2024-07-12 13:35:40.198344] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198352] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f630): datao=0, datal=8192, cccid=5 00:27:42.933 [2024-07-12 13:35:40.198360] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8e700) on tqpair(0xd3f630): expected_datao=0, payload_size=8192 00:27:42.933 [2024-07-12 13:35:40.198367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198437] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198450] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.933 [2024-07-12 13:35:40.198473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.933 [2024-07-12 13:35:40.198480] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198486] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f630): datao=0, datal=512, cccid=4 00:27:42.933 [2024-07-12 13:35:40.198494] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8e580) on tqpair(0xd3f630): expected_datao=0, payload_size=512 00:27:42.933 [2024-07-12 13:35:40.198501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198510] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198517] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.933 [2024-07-12 13:35:40.198534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.933 [2024-07-12 13:35:40.198541] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198547] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f630): datao=0, datal=512, cccid=6 00:27:42.933 [2024-07-12 13:35:40.198554] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8e880) on tqpair(0xd3f630): expected_datao=0, payload_size=512 00:27:42.933 [2024-07-12 13:35:40.198562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198571] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198577] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.933 [2024-07-12 13:35:40.198594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.933 [2024-07-12 13:35:40.198601] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198607] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f630): datao=0, datal=4096, cccid=7 00:27:42.933 [2024-07-12 13:35:40.198634] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8ea00) on tqpair(0xd3f630): expected_datao=0, payload_size=4096 00:27:42.933 [2024-07-12 13:35:40.198642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198652] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:27:42.933 [2024-07-12 13:35:40.198659] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.933 [2024-07-12 13:35:40.198690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.933 [2024-07-12 13:35:40.198697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e700) on tqpair=0xd3f630 00:27:42.933 [2024-07-12 13:35:40.198722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.933 [2024-07-12 13:35:40.198733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.933 [2024-07-12 13:35:40.198739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e580) on tqpair=0xd3f630 00:27:42.933 [2024-07-12 13:35:40.198760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.933 [2024-07-12 13:35:40.198770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.933 [2024-07-12 13:35:40.198776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e880) on tqpair=0xd3f630 00:27:42.933 [2024-07-12 13:35:40.198792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.933 [2024-07-12 13:35:40.198801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.933 [2024-07-12 13:35:40.198807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.933 [2024-07-12 13:35:40.198813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8ea00) on tqpair=0xd3f630 00:27:42.933 ===================================================== 00:27:42.933 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:42.933 ===================================================== 00:27:42.933 Controller Capabilities/Features 00:27:42.933 ================================ 00:27:42.933 Vendor ID: 8086 00:27:42.933 Subsystem Vendor ID: 8086 00:27:42.933 Serial Number: SPDK00000000000001 00:27:42.933 Model Number: SPDK bdev Controller 00:27:42.933 Firmware Version: 24.09 00:27:42.933 Recommended Arb Burst: 6 00:27:42.933 IEEE OUI Identifier: e4 d2 5c 00:27:42.933 Multi-path I/O 00:27:42.933 May have multiple subsystem ports: Yes 00:27:42.933 May have multiple controllers: Yes 00:27:42.933 Associated with SR-IOV VF: No 00:27:42.933 Max Data Transfer Size: 131072 00:27:42.933 Max Number of Namespaces: 32 00:27:42.933 Max Number of I/O Queues: 127 00:27:42.933 NVMe Specification Version (VS): 1.3 00:27:42.933 NVMe Specification Version (Identify): 1.3 00:27:42.933 Maximum Queue Entries: 128 00:27:42.933 Contiguous Queues Required: Yes 00:27:42.933 Arbitration Mechanisms Supported 00:27:42.933 Weighted Round Robin: Not Supported 00:27:42.933 Vendor Specific: Not Supported 00:27:42.933 Reset Timeout: 15000 ms 00:27:42.933 Doorbell Stride: 4 bytes 00:27:42.933 NVM Subsystem Reset: Not Supported 00:27:42.933 Command Sets Supported 00:27:42.933 NVM Command Set: Supported 00:27:42.933 Boot Partition: Not Supported 00:27:42.933 Memory Page Size Minimum: 4096 bytes 00:27:42.933 Memory Page Size Maximum: 4096 bytes 00:27:42.933 Persistent Memory Region: Not 
Supported 00:27:42.933 Optional Asynchronous Events Supported 00:27:42.933 Namespace Attribute Notices: Supported 00:27:42.933 Firmware Activation Notices: Not Supported 00:27:42.933 ANA Change Notices: Not Supported 00:27:42.933 PLE Aggregate Log Change Notices: Not Supported 00:27:42.933 LBA Status Info Alert Notices: Not Supported 00:27:42.933 EGE Aggregate Log Change Notices: Not Supported 00:27:42.933 Normal NVM Subsystem Shutdown event: Not Supported 00:27:42.933 Zone Descriptor Change Notices: Not Supported 00:27:42.933 Discovery Log Change Notices: Not Supported 00:27:42.933 Controller Attributes 00:27:42.933 128-bit Host Identifier: Supported 00:27:42.933 Non-Operational Permissive Mode: Not Supported 00:27:42.933 NVM Sets: Not Supported 00:27:42.933 Read Recovery Levels: Not Supported 00:27:42.933 Endurance Groups: Not Supported 00:27:42.933 Predictable Latency Mode: Not Supported 00:27:42.933 Traffic Based Keep ALive: Not Supported 00:27:42.933 Namespace Granularity: Not Supported 00:27:42.933 SQ Associations: Not Supported 00:27:42.933 UUID List: Not Supported 00:27:42.933 Multi-Domain Subsystem: Not Supported 00:27:42.933 Fixed Capacity Management: Not Supported 00:27:42.933 Variable Capacity Management: Not Supported 00:27:42.933 Delete Endurance Group: Not Supported 00:27:42.933 Delete NVM Set: Not Supported 00:27:42.933 Extended LBA Formats Supported: Not Supported 00:27:42.933 Flexible Data Placement Supported: Not Supported 00:27:42.933 00:27:42.933 Controller Memory Buffer Support 00:27:42.933 ================================ 00:27:42.933 Supported: No 00:27:42.933 00:27:42.933 Persistent Memory Region Support 00:27:42.933 ================================ 00:27:42.933 Supported: No 00:27:42.933 00:27:42.933 Admin Command Set Attributes 00:27:42.933 ============================ 00:27:42.933 Security Send/Receive: Not Supported 00:27:42.933 Format NVM: Not Supported 00:27:42.933 Firmware Activate/Download: Not Supported 00:27:42.933 Namespace Management: Not Supported 00:27:42.933 Device Self-Test: Not Supported 00:27:42.933 Directives: Not Supported 00:27:42.933 NVMe-MI: Not Supported 00:27:42.933 Virtualization Management: Not Supported 00:27:42.933 Doorbell Buffer Config: Not Supported 00:27:42.933 Get LBA Status Capability: Not Supported 00:27:42.933 Command & Feature Lockdown Capability: Not Supported 00:27:42.933 Abort Command Limit: 4 00:27:42.934 Async Event Request Limit: 4 00:27:42.934 Number of Firmware Slots: N/A 00:27:42.934 Firmware Slot 1 Read-Only: N/A 00:27:42.934 Firmware Activation Without Reset: N/A 00:27:42.934 Multiple Update Detection Support: N/A 00:27:42.934 Firmware Update Granularity: No Information Provided 00:27:42.934 Per-Namespace SMART Log: No 00:27:42.934 Asymmetric Namespace Access Log Page: Not Supported 00:27:42.934 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:42.934 Command Effects Log Page: Supported 00:27:42.934 Get Log Page Extended Data: Supported 00:27:42.934 Telemetry Log Pages: Not Supported 00:27:42.934 Persistent Event Log Pages: Not Supported 00:27:42.934 Supported Log Pages Log Page: May Support 00:27:42.934 Commands Supported & Effects Log Page: Not Supported 00:27:42.934 Feature Identifiers & Effects Log Page:May Support 00:27:42.934 NVMe-MI Commands & Effects Log Page: May Support 00:27:42.934 Data Area 4 for Telemetry Log: Not Supported 00:27:42.934 Error Log Page Entries Supported: 128 00:27:42.934 Keep Alive: Supported 00:27:42.934 Keep Alive Granularity: 10000 ms 00:27:42.934 00:27:42.934 NVM Command Set Attributes 
00:27:42.934 ========================== 00:27:42.934 Submission Queue Entry Size 00:27:42.934 Max: 64 00:27:42.934 Min: 64 00:27:42.934 Completion Queue Entry Size 00:27:42.934 Max: 16 00:27:42.934 Min: 16 00:27:42.934 Number of Namespaces: 32 00:27:42.934 Compare Command: Supported 00:27:42.934 Write Uncorrectable Command: Not Supported 00:27:42.934 Dataset Management Command: Supported 00:27:42.934 Write Zeroes Command: Supported 00:27:42.934 Set Features Save Field: Not Supported 00:27:42.934 Reservations: Supported 00:27:42.934 Timestamp: Not Supported 00:27:42.934 Copy: Supported 00:27:42.934 Volatile Write Cache: Present 00:27:42.934 Atomic Write Unit (Normal): 1 00:27:42.934 Atomic Write Unit (PFail): 1 00:27:42.934 Atomic Compare & Write Unit: 1 00:27:42.934 Fused Compare & Write: Supported 00:27:42.934 Scatter-Gather List 00:27:42.934 SGL Command Set: Supported 00:27:42.934 SGL Keyed: Supported 00:27:42.934 SGL Bit Bucket Descriptor: Not Supported 00:27:42.934 SGL Metadata Pointer: Not Supported 00:27:42.934 Oversized SGL: Not Supported 00:27:42.934 SGL Metadata Address: Not Supported 00:27:42.934 SGL Offset: Supported 00:27:42.934 Transport SGL Data Block: Not Supported 00:27:42.934 Replay Protected Memory Block: Not Supported 00:27:42.934 00:27:42.934 Firmware Slot Information 00:27:42.934 ========================= 00:27:42.934 Active slot: 1 00:27:42.934 Slot 1 Firmware Revision: 24.09 00:27:42.934 00:27:42.934 00:27:42.934 Commands Supported and Effects 00:27:42.934 ============================== 00:27:42.934 Admin Commands 00:27:42.934 -------------- 00:27:42.934 Get Log Page (02h): Supported 00:27:42.934 Identify (06h): Supported 00:27:42.934 Abort (08h): Supported 00:27:42.934 Set Features (09h): Supported 00:27:42.934 Get Features (0Ah): Supported 00:27:42.934 Asynchronous Event Request (0Ch): Supported 00:27:42.934 Keep Alive (18h): Supported 00:27:42.934 I/O Commands 00:27:42.934 ------------ 00:27:42.934 Flush (00h): Supported LBA-Change 00:27:42.934 Write (01h): Supported LBA-Change 00:27:42.934 Read (02h): Supported 00:27:42.934 Compare (05h): Supported 00:27:42.934 Write Zeroes (08h): Supported LBA-Change 00:27:42.934 Dataset Management (09h): Supported LBA-Change 00:27:42.934 Copy (19h): Supported LBA-Change 00:27:42.934 00:27:42.934 Error Log 00:27:42.934 ========= 00:27:42.934 00:27:42.934 Arbitration 00:27:42.934 =========== 00:27:42.934 Arbitration Burst: 1 00:27:42.934 00:27:42.934 Power Management 00:27:42.934 ================ 00:27:42.934 Number of Power States: 1 00:27:42.934 Current Power State: Power State #0 00:27:42.934 Power State #0: 00:27:42.934 Max Power: 0.00 W 00:27:42.934 Non-Operational State: Operational 00:27:42.934 Entry Latency: Not Reported 00:27:42.934 Exit Latency: Not Reported 00:27:42.934 Relative Read Throughput: 0 00:27:42.934 Relative Read Latency: 0 00:27:42.934 Relative Write Throughput: 0 00:27:42.934 Relative Write Latency: 0 00:27:42.934 Idle Power: Not Reported 00:27:42.934 Active Power: Not Reported 00:27:42.934 Non-Operational Permissive Mode: Not Supported 00:27:42.934 00:27:42.934 Health Information 00:27:42.934 ================== 00:27:42.934 Critical Warnings: 00:27:42.934 Available Spare Space: OK 00:27:42.934 Temperature: OK 00:27:42.934 Device Reliability: OK 00:27:42.934 Read Only: No 00:27:42.934 Volatile Memory Backup: OK 00:27:42.934 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:42.934 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:42.934 Available Spare: 0% 00:27:42.934 Available Spare Threshold: 0% 
00:27:42.934 Life Percentage Used:[2024-07-12 13:35:40.198932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.198943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd3f630) 00:27:42.934 [2024-07-12 13:35:40.198953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.934 [2024-07-12 13:35:40.198974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8ea00, cid 7, qid 0 00:27:42.934 [2024-07-12 13:35:40.199172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.934 [2024-07-12 13:35:40.199188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.934 [2024-07-12 13:35:40.199195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8ea00) on tqpair=0xd3f630 00:27:42.934 [2024-07-12 13:35:40.199252] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:42.934 [2024-07-12 13:35:40.199275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8df80) on tqpair=0xd3f630 00:27:42.934 [2024-07-12 13:35:40.199301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.934 [2024-07-12 13:35:40.199311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e100) on tqpair=0xd3f630 00:27:42.934 [2024-07-12 13:35:40.199327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.934 [2024-07-12 13:35:40.199336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e280) on tqpair=0xd3f630 00:27:42.934 [2024-07-12 13:35:40.199343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.934 [2024-07-12 13:35:40.199351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.934 [2024-07-12 13:35:40.199377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.934 [2024-07-12 13:35:40.199392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.934 [2024-07-12 13:35:40.199431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.934 [2024-07-12 13:35:40.199454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.934 [2024-07-12 13:35:40.199678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.934 [2024-07-12 13:35:40.199694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.934 [2024-07-12 13:35:40.199701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.934 
[2024-07-12 13:35:40.199720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.934 [2024-07-12 13:35:40.199744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.934 [2024-07-12 13:35:40.199773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.934 [2024-07-12 13:35:40.199919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.934 [2024-07-12 13:35:40.199934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.934 [2024-07-12 13:35:40.199941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.934 [2024-07-12 13:35:40.199956] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:42.934 [2024-07-12 13:35:40.199967] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:42.934 [2024-07-12 13:35:40.199983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.199999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.934 [2024-07-12 13:35:40.200009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.934 [2024-07-12 13:35:40.200032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.934 [2024-07-12 13:35:40.200168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.934 [2024-07-12 13:35:40.200184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.934 [2024-07-12 13:35:40.200191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.200197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.934 [2024-07-12 13:35:40.200216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.200227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.934 [2024-07-12 13:35:40.200233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.200243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.200264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.200436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.200456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.200464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200471] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.200490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.200517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.200538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.200707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.200722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.200729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.200755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.200781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.200801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.200922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.200937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.200944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.200969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.200986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.200997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.201017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.201187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.201203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.201210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.201217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.201235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 
[2024-07-12 13:35:40.201245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.201252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.201262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.201282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.201459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.201474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.201485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.201493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.201511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.201521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.201527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.201538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.201558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.201731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.201747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.201754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.201760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.201779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.201789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.201795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.201806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.201826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.201978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.201996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.202004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.202027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.202056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.202076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.202199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.202214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.202221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.202247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.202274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.202294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.202471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.202486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.202494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.202523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.202551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.202571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.202744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.202759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.202766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.202791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.202819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 
13:35:40.202839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.202964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.202979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.202986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.202993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.203011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.203021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.203028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.203038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.203058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.203180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.203195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.203203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.203209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.935 [2024-07-12 13:35:40.203228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.203238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.935 [2024-07-12 13:35:40.203244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.935 [2024-07-12 13:35:40.203255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.935 [2024-07-12 13:35:40.203275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.935 [2024-07-12 13:35:40.207340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.935 [2024-07-12 13:35:40.207357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.935 [2024-07-12 13:35:40.207365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.936 [2024-07-12 13:35:40.207371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.936 [2024-07-12 13:35:40.207391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.936 [2024-07-12 13:35:40.207405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.936 [2024-07-12 13:35:40.207413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f630) 00:27:42.936 [2024-07-12 13:35:40.207423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.936 [2024-07-12 13:35:40.207445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8e400, cid 3, qid 0 00:27:42.936 [2024-07-12 13:35:40.207634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
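The interleaved records above are the host-side NVMe/TCP PDU state machine at work: "pdu type = 5" is dispatched to nvme_tcp_capsule_resp_hdr_handle (a CapsuleResp carrying a command completion) and "pdu type = 7" to nvme_tcp_c2h_data_hdr_handle (C2HData carrying identify/log-page payload back to the host), matching the handler names printed in the trace. A quick way to tally which PDU types a saved copy of this console output contains is sketched below; the build.log filename is hypothetical and not produced by this run.
# tally PDU types seen in a saved console log (filename is an assumption)
grep -oE 'pdu type *= *[0-9]+' build.log | sed 's/ *= */=/' | sort | uniq -c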
00:27:42.936 [2024-07-12 13:35:40.207650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.936 [2024-07-12 13:35:40.207657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.936 [2024-07-12 13:35:40.207666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd8e400) on tqpair=0xd3f630 00:27:42.936 [2024-07-12 13:35:40.207683] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:42.936 0% 00:27:42.936 Data Units Read: 0 00:27:42.936 Data Units Written: 0 00:27:42.936 Host Read Commands: 0 00:27:42.936 Host Write Commands: 0 00:27:42.936 Controller Busy Time: 0 minutes 00:27:42.936 Power Cycles: 0 00:27:42.936 Power On Hours: 0 hours 00:27:42.936 Unsafe Shutdowns: 0 00:27:42.936 Unrecoverable Media Errors: 0 00:27:42.936 Lifetime Error Log Entries: 0 00:27:42.936 Warning Temperature Time: 0 minutes 00:27:42.936 Critical Temperature Time: 0 minutes 00:27:42.936 00:27:42.936 Number of Queues 00:27:42.936 ================ 00:27:42.936 Number of I/O Submission Queues: 127 00:27:42.936 Number of I/O Completion Queues: 127 00:27:42.936 00:27:42.936 Active Namespaces 00:27:42.936 ================= 00:27:42.936 Namespace ID:1 00:27:42.936 Error Recovery Timeout: Unlimited 00:27:42.936 Command Set Identifier: NVM (00h) 00:27:42.936 Deallocate: Supported 00:27:42.936 Deallocated/Unwritten Error: Not Supported 00:27:42.936 Deallocated Read Value: Unknown 00:27:42.936 Deallocate in Write Zeroes: Not Supported 00:27:42.936 Deallocated Guard Field: 0xFFFF 00:27:42.936 Flush: Supported 00:27:42.936 Reservation: Supported 00:27:42.936 Namespace Sharing Capabilities: Multiple Controllers 00:27:42.936 Size (in LBAs): 131072 (0GiB) 00:27:42.936 Capacity (in LBAs): 131072 (0GiB) 00:27:42.936 Utilization (in LBAs): 131072 (0GiB) 00:27:42.936 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:42.936 EUI64: ABCDEF0123456789 00:27:42.936 UUID: 121fe6dc-f2de-41fd-8134-8f735c82e926 00:27:42.936 Thin Provisioning: Not Supported 00:27:42.936 Per-NS Atomic Units: Yes 00:27:42.936 Atomic Boundary Size (Normal): 0 00:27:42.936 Atomic Boundary Size (PFail): 0 00:27:42.936 Atomic Boundary Offset: 0 00:27:42.936 Maximum Single Source Range Length: 65535 00:27:42.936 Maximum Copy Length: 65535 00:27:42.936 Maximum Source Range Count: 1 00:27:42.936 NGUID/EUI64 Never Reused: No 00:27:42.936 Namespace Write Protected: No 00:27:42.936 Number of LBA Formats: 1 00:27:42.936 Current LBA Format: LBA Format #00 00:27:42.936 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:42.936 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
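At this point the identify phase is complete: the controller and namespace report above came back over the TCP listener at 10.0.0.2:4420 from subsystem nqn.2016-06.io.spdk:cnode1, and host/identify.sh moves on to teardown (the nvmf_delete_subsystem RPC visible just above, followed by nvmftestfini). A minimal sketch of running the same query by hand against a target like this one is shown below; the example binary path and the -r transport-ID syntax are assumptions about SPDK's packaged identify example, not taken from this log.
# illustrative only -- binary path and -r syntax are assumptions
./build/examples/identify -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# teardown as the script does it (this RPC does appear in the trace above)
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1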
00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:42.936 rmmod nvme_tcp 00:27:42.936 rmmod nvme_fabrics 00:27:42.936 rmmod nvme_keyring 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3669170 ']' 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3669170 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3669170 ']' 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3669170 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3669170 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3669170' 00:27:42.936 killing process with pid 3669170 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3669170 00:27:42.936 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3669170 00:27:43.195 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:43.195 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:43.195 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:43.195 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.195 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.195 13:35:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.195 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.195 13:35:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.737 13:35:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:45.737 00:27:45.737 real 0m5.484s 00:27:45.737 user 0m4.799s 00:27:45.737 sys 0m1.908s 00:27:45.737 13:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:45.737 13:35:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:45.737 ************************************ 00:27:45.737 END TEST nvmf_identify 00:27:45.737 ************************************ 00:27:45.737 13:35:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:45.737 13:35:42 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:45.737 13:35:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:45.737 13:35:42 
nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.737 13:35:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.737 ************************************ 00:27:45.737 START TEST nvmf_perf 00:27:45.737 ************************************ 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:45.737 * Looking for test storage... 00:27:45.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.737 
13:35:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:45.737 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:45.738 13:35:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:45.738 13:35:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:47.639 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:47.639 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:47.639 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:47.640 Found net devices under 0000:09:00.0: cvl_0_0 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:47.640 Found net devices under 0000:09:00.1: cvl_0_1 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:27:47.640 00:27:47.640 --- 10.0.0.2 ping statistics --- 00:27:47.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.640 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:27:47.640 00:27:47.640 --- 10.0.0.1 ping statistics --- 00:27:47.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.640 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3671233 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3671233 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3671233 ']' 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.640 13:35:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:47.640 [2024-07-12 13:35:44.839633] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:27:47.640 [2024-07-12 13:35:44.839743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.640 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.640 [2024-07-12 13:35:44.879074] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:47.640 [2024-07-12 13:35:44.905162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.640 [2024-07-12 13:35:44.994709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
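The trace above splits the two E810 ports across network namespaces so the NVMe/TCP target and the initiator exchange traffic over a real link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side); the two cross-namespace pings confirm connectivity in both directions. A condensed sketch of that setup, assuming the same cvl_0_* interface names (address flushes and error handling omitted):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator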
00:27:47.640 [2024-07-12 13:35:44.994758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.640 [2024-07-12 13:35:44.994787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.640 [2024-07-12 13:35:44.994798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.640 [2024-07-12 13:35:44.994808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.640 [2024-07-12 13:35:44.994883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.640 [2024-07-12 13:35:44.994943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.640 [2024-07-12 13:35:44.995012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.640 [2024-07-12 13:35:44.995014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.898 13:35:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.898 13:35:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:27:47.898 13:35:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.898 13:35:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:47.898 13:35:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:47.898 13:35:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.898 13:35:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:47.898 13:35:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:51.176 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:51.176 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:51.176 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:27:51.176 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:51.434 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:51.434 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:27:51.434 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:51.434 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:51.434 13:35:48 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:51.725 [2024-07-12 13:35:49.068505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.725 13:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:51.985 13:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:51.985 13:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:52.243 13:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev 
in $bdevs 00:27:52.243 13:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:52.501 13:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.759 [2024-07-12 13:35:50.056056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.759 13:35:50 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:53.017 13:35:50 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:27:53.017 13:35:50 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:27:53.017 13:35:50 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:53.017 13:35:50 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:27:54.402 Initializing NVMe Controllers 00:27:54.402 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:27:54.402 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:27:54.402 Initialization complete. Launching workers. 00:27:54.402 ======================================================== 00:27:54.402 Latency(us) 00:27:54.402 Device Information : IOPS MiB/s Average min max 00:27:54.402 PCIE (0000:0b:00.0) NSID 1 from core 0: 85324.80 333.30 374.48 42.51 6258.80 00:27:54.402 ======================================================== 00:27:54.402 Total : 85324.80 333.30 374.48 42.51 6258.80 00:27:54.402 00:27:54.402 13:35:51 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:54.402 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.778 Initializing NVMe Controllers 00:27:55.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:55.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:55.778 Initialization complete. Launching workers. 
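In the trace above the target is provisioned entirely over its RPC socket: the local controller at 0000:0b:00.0 is attached as Nvme0 via gen_nvme.sh | rpc.py load_subsystem_config, a 64 MiB malloc bdev is added, and both bdevs are exported through a single subsystem listening on 10.0.0.2:4420. A minimal sketch of the equivalent rpc.py sequence ($RPC is shorthand for the script path used above; Malloc0 is the name returned by the malloc create call):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> Malloc0
  $RPC nvmf_create_transport -t tcp -o                # same transport options as the harness
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420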
00:27:55.778 ======================================================== 00:27:55.778 Latency(us) 00:27:55.778 Device Information : IOPS MiB/s Average min max 00:27:55.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.66 0.37 10569.27 185.53 45715.37 00:27:55.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.82 0.20 19833.27 7960.10 47894.14 00:27:55.778 ======================================================== 00:27:55.778 Total : 146.48 0.57 13783.31 185.53 47894.14 00:27:55.778 00:27:55.778 13:35:53 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:55.778 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.151 Initializing NVMe Controllers 00:27:57.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:57.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:57.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:57.151 Initialization complete. Launching workers. 00:27:57.151 ======================================================== 00:27:57.151 Latency(us) 00:27:57.151 Device Information : IOPS MiB/s Average min max 00:27:57.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8334.98 32.56 3856.21 442.05 8755.45 00:27:57.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3890.99 15.20 8273.91 6795.34 16064.43 00:27:57.152 ======================================================== 00:27:57.152 Total : 12225.98 47.76 5262.17 442.05 16064.43 00:27:57.152 00:27:57.152 13:35:54 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:57.152 13:35:54 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:57.152 13:35:54 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:57.152 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.682 Initializing NVMe Controllers 00:27:59.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.682 Controller IO queue size 128, less than required. 00:27:59.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.682 Controller IO queue size 128, less than required. 00:27:59.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:59.682 Initialization complete. Launching workers. 
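Each perf run ends with the same fixed-width summary: per-namespace IOPS, MiB/s, and average/min/max latency in microseconds, followed by a Total row. When comparing the runs in a log like this one it can help to pull out just those totals; a small sketch, assuming the output has been saved to a hypothetical perf.log:

  awk '$1 == "Total" && $2 == ":" {printf "IOPS=%s MiB/s=%s avg_us=%s max_us=%s\n", $3, $4, $5, $7}' perf.log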
00:27:59.682 ======================================================== 00:27:59.682 Latency(us) 00:27:59.682 Device Information : IOPS MiB/s Average min max 00:27:59.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1193.74 298.44 109330.24 66949.62 197665.48 00:27:59.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.87 154.22 212814.24 70605.33 304264.33 00:27:59.683 ======================================================== 00:27:59.683 Total : 1810.61 452.65 144586.80 66949.62 304264.33 00:27:59.683 00:27:59.683 13:35:56 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:59.683 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.683 No valid NVMe controllers or AIO or URING devices found 00:27:59.683 Initializing NVMe Controllers 00:27:59.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.683 Controller IO queue size 128, less than required. 00:27:59.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.683 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:59.683 Controller IO queue size 128, less than required. 00:27:59.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.683 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:59.683 WARNING: Some requested NVMe devices were skipped 00:27:59.683 13:35:57 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:59.683 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.210 Initializing NVMe Controllers 00:28:02.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.210 Controller IO queue size 128, less than required. 00:28:02.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.210 Controller IO queue size 128, less than required. 00:28:02.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:02.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:02.210 Initialization complete. Launching workers. 
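All initiator-side measurements in this test are variations of one spdk_nvme_perf invocation: only the queue depth (-q), I/O size in bytes (-o), and run time in seconds (-t) change between runs, while -w/-M select a 50/50 random read/write mix and -r points the initiator at the NVMe/TCP listener created earlier. A representative sketch using the same binary path (this is one of the combinations exercised above; the -HI, -O, -c and -P flags used in some runs are omitted here):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # queue depth 32, 4 KiB I/Os, randrw 50/50, 1 second, over NVMe/TCP
  $PERF -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'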
00:28:02.210 00:28:02.210 ==================== 00:28:02.210 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:02.210 TCP transport: 00:28:02.210 polls: 31432 00:28:02.210 idle_polls: 8713 00:28:02.210 sock_completions: 22719 00:28:02.210 nvme_completions: 4891 00:28:02.210 submitted_requests: 7290 00:28:02.210 queued_requests: 1 00:28:02.210 00:28:02.210 ==================== 00:28:02.210 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:02.210 TCP transport: 00:28:02.210 polls: 37470 00:28:02.210 idle_polls: 18678 00:28:02.210 sock_completions: 18792 00:28:02.210 nvme_completions: 2655 00:28:02.210 submitted_requests: 3930 00:28:02.210 queued_requests: 1 00:28:02.210 ======================================================== 00:28:02.210 Latency(us) 00:28:02.210 Device Information : IOPS MiB/s Average min max 00:28:02.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1222.23 305.56 107075.76 61501.43 147970.08 00:28:02.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 663.35 165.84 199963.48 89288.64 322103.53 00:28:02.210 ======================================================== 00:28:02.210 Total : 1885.58 471.39 139753.91 61501.43 322103.53 00:28:02.210 00:28:02.468 13:35:59 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:02.468 13:35:59 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.725 13:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:02.725 13:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:0b:00.0 ']' 00:28:02.725 13:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:06.002 13:36:03 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d42ad475-77ba-4ff5-a542-9c842ae53791 00:28:06.002 13:36:03 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d42ad475-77ba-4ff5-a542-9c842ae53791 00:28:06.002 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d42ad475-77ba-4ff5-a542-9c842ae53791 00:28:06.002 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:06.002 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:06.002 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:06.002 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:06.259 { 00:28:06.259 "uuid": "d42ad475-77ba-4ff5-a542-9c842ae53791", 00:28:06.259 "name": "lvs_0", 00:28:06.259 "base_bdev": "Nvme0n1", 00:28:06.259 "total_data_clusters": 238234, 00:28:06.259 "free_clusters": 238234, 00:28:06.259 "block_size": 512, 00:28:06.259 "cluster_size": 4194304 00:28:06.259 } 00:28:06.259 ]' 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d42ad475-77ba-4ff5-a542-9c842ae53791") .free_clusters' 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d42ad475-77ba-4ff5-a542-9c842ae53791") .cluster_size' 00:28:06.259 13:36:03 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:06.259 952936 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:06.259 13:36:03 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d42ad475-77ba-4ff5-a542-9c842ae53791 lbd_0 20480 00:28:06.823 13:36:04 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=c58605c4-1072-441c-8ea7-1e34d4c47274 00:28:06.823 13:36:04 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore c58605c4-1072-441c-8ea7-1e34d4c47274 lvs_n_0 00:28:07.755 13:36:04 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2f9e2d36-b6e5-460a-9425-827412fec150 00:28:07.755 13:36:04 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2f9e2d36-b6e5-460a-9425-827412fec150 00:28:07.755 13:36:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=2f9e2d36-b6e5-460a-9425-827412fec150 00:28:07.755 13:36:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:07.755 13:36:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:07.755 13:36:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:07.755 13:36:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:07.755 { 00:28:07.755 "uuid": "d42ad475-77ba-4ff5-a542-9c842ae53791", 00:28:07.755 "name": "lvs_0", 00:28:07.755 "base_bdev": "Nvme0n1", 00:28:07.755 "total_data_clusters": 238234, 00:28:07.755 "free_clusters": 233114, 00:28:07.755 "block_size": 512, 00:28:07.755 "cluster_size": 4194304 00:28:07.755 }, 00:28:07.755 { 00:28:07.755 "uuid": "2f9e2d36-b6e5-460a-9425-827412fec150", 00:28:07.755 "name": "lvs_n_0", 00:28:07.755 "base_bdev": "c58605c4-1072-441c-8ea7-1e34d4c47274", 00:28:07.755 "total_data_clusters": 5114, 00:28:07.755 "free_clusters": 5114, 00:28:07.755 "block_size": 512, 00:28:07.755 "cluster_size": 4194304 00:28:07.755 } 00:28:07.755 ]' 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2f9e2d36-b6e5-460a-9425-827412fec150") .free_clusters' 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2f9e2d36-b6e5-460a-9425-827412fec150") .cluster_size' 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:07.755 20456 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:07.755 13:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2f9e2d36-b6e5-460a-9425-827412fec150 lbd_nest_0 20456 00:28:08.319 13:36:05 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=f7e28592-9981-4316-a665-231d0e6032a8 00:28:08.319 13:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.319 13:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:08.319 13:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f7e28592-9981-4316-a665-231d0e6032a8 00:28:08.576 13:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.833 13:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:08.833 13:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:08.833 13:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:08.833 13:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:08.833 13:36:06 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:08.833 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.268 Initializing NVMe Controllers 00:28:21.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.268 Initialization complete. Launching workers. 00:28:21.268 ======================================================== 00:28:21.268 Latency(us) 00:28:21.268 Device Information : IOPS MiB/s Average min max 00:28:21.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.99 0.02 22732.48 215.03 47615.08 00:28:21.268 ======================================================== 00:28:21.268 Total : 43.99 0.02 22732.48 215.03 47615.08 00:28:21.268 00:28:21.268 13:36:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:21.268 13:36:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.268 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.261 Initializing NVMe Controllers 00:28:31.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:31.261 Initialization complete. Launching workers. 
00:28:31.261 ======================================================== 00:28:31.261 Latency(us) 00:28:31.261 Device Information : IOPS MiB/s Average min max 00:28:31.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.80 9.72 12863.90 6044.73 47900.01 00:28:31.261 ======================================================== 00:28:31.261 Total : 77.80 9.72 12863.90 6044.73 47900.01 00:28:31.261 00:28:31.261 13:36:26 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:31.261 13:36:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:31.261 13:36:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.261 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.220 Initializing NVMe Controllers 00:28:41.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.220 Initialization complete. Launching workers. 00:28:41.220 ======================================================== 00:28:41.220 Latency(us) 00:28:41.220 Device Information : IOPS MiB/s Average min max 00:28:41.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6442.40 3.15 4973.76 299.77 43077.66 00:28:41.220 ======================================================== 00:28:41.220 Total : 6442.40 3.15 4973.76 299.77 43077.66 00:28:41.220 00:28:41.220 13:36:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:41.220 13:36:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.220 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.180 Initializing NVMe Controllers 00:28:51.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:51.180 Initialization complete. Launching workers. 00:28:51.180 ======================================================== 00:28:51.180 Latency(us) 00:28:51.180 Device Information : IOPS MiB/s Average min max 00:28:51.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2149.70 268.71 14889.61 1284.81 31811.31 00:28:51.180 ======================================================== 00:28:51.180 Total : 2149.70 268.71 14889.61 1284.81 31811.31 00:28:51.180 00:28:51.180 13:36:47 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:51.180 13:36:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:51.180 13:36:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:51.180 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.139 Initializing NVMe Controllers 00:29:01.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.139 Controller IO queue size 128, less than required. 00:29:01.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
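The lvol phase above stacks a second lvstore on top of a logical volume: lvs_0 is created directly on Nvme0n1, a 20480 MiB volume lbd_0 is carved from it, lvs_n_0 is created on lbd_0, and lbd_nest_0 fills the nested store. The free-space figures come straight from bdev_lvol_get_lvstores as free_clusters x cluster_size: 238234 x 4 MiB = 952936 MiB for lvs_0 and 5114 x 4 MiB = 20456 MiB for lvs_n_0. A sketch of the same stacking with the UUIDs captured in shell variables (variable names mirror the script; the jq expression is only an illustration):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  ls_guid=$($RPC bdev_lvol_create_lvstore Nvme0n1 lvs_0)
  lb_guid=$($RPC bdev_lvol_create -u "$ls_guid" lbd_0 20480)              # 20480 MiB lvol
  ls_nested_guid=$($RPC bdev_lvol_create_lvstore "$lb_guid" lvs_n_0)
  lb_nested_guid=$($RPC bdev_lvol_create -u "$ls_nested_guid" lbd_nest_0 20456)
  # free MiB per store = free_clusters * cluster_size / 1 MiB
  $RPC bdev_lvol_get_lvstores | jq -r '.[] | "\(.name): \(.free_clusters * .cluster_size / 1048576) MiB free"'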
00:29:01.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.139 Initialization complete. Launching workers. 00:29:01.139 ======================================================== 00:29:01.139 Latency(us) 00:29:01.139 Device Information : IOPS MiB/s Average min max 00:29:01.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11915.23 5.82 10745.52 1519.71 25042.99 00:29:01.139 ======================================================== 00:29:01.139 Total : 11915.23 5.82 10745.52 1519.71 25042.99 00:29:01.139 00:29:01.139 13:36:57 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:01.139 13:36:57 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:01.139 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.105 Initializing NVMe Controllers 00:29:11.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.105 Controller IO queue size 128, less than required. 00:29:11.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:11.105 Initialization complete. Launching workers. 00:29:11.105 ======================================================== 00:29:11.105 Latency(us) 00:29:11.105 Device Information : IOPS MiB/s Average min max 00:29:11.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1195.72 149.47 107217.10 23992.97 210494.93 00:29:11.105 ======================================================== 00:29:11.105 Total : 1195.72 149.47 107217.10 23992.97 210494.93 00:29:11.105 00:29:11.105 13:37:08 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.105 13:37:08 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f7e28592-9981-4316-a665-231d0e6032a8 00:29:12.038 13:37:09 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:12.038 13:37:09 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c58605c4-1072-441c-8ea7-1e34d4c47274 00:29:12.296 13:37:09 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:12.553 13:37:09 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:12.553 13:37:09 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:12.553 13:37:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:12.553 13:37:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:12.553 13:37:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:12.553 13:37:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:12.553 13:37:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:12.553 13:37:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:12.553 rmmod nvme_tcp 00:29:12.553 rmmod nvme_fabrics 00:29:12.553 rmmod nvme_keyring 00:29:12.812 13:37:10 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3671233 ']' 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3671233 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3671233 ']' 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3671233 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3671233 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3671233' 00:29:12.812 killing process with pid 3671233 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3671233 00:29:12.812 13:37:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3671233 00:29:14.187 13:37:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:14.187 13:37:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:14.187 13:37:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:14.187 13:37:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:14.187 13:37:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:14.187 13:37:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.187 13:37:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.187 13:37:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.724 13:37:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:16.724 00:29:16.724 real 1m30.973s 00:29:16.724 user 5m30.094s 00:29:16.724 sys 0m15.777s 00:29:16.724 13:37:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:16.724 13:37:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:16.724 ************************************ 00:29:16.724 END TEST nvmf_perf 00:29:16.724 ************************************ 00:29:16.724 13:37:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:16.724 13:37:13 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:16.724 13:37:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:16.724 13:37:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:16.724 13:37:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.724 ************************************ 00:29:16.724 START TEST nvmf_fio_host 00:29:16.724 ************************************ 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:16.725 * Looking for test 
storage... 00:29:16.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:16.725 13:37:13 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.628 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:18.629 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:18.629 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:18.629 Found net devices under 0000:09:00.0: cvl_0_0 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:18.629 Found net devices under 0000:09:00.1: cvl_0_1 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
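The common.sh discovery traced above matches the two Intel 0x159b (E810) functions at 0000:09:00.0/.1, checks that they are bound to a known driver (ice), and resolves the network interfaces attached to each function through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A stand-alone way to reproduce that mapping by hand (a hypothetical check, not part of the harness):

  # list E810 functions (vendor 8086, device 159b) and the netdevs bound to them
  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done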
00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:18.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:29:18.629 00:29:18.629 --- 10.0.0.2 ping statistics --- 00:29:18.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.629 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:29:18.629 00:29:18.629 --- 10.0.0.1 ping statistics --- 00:29:18.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.629 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3683940 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3683940 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3683940 ']' 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:18.629 13:37:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.629 [2024-07-12 13:37:16.002616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:29:18.629 [2024-07-12 13:37:16.002689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.629 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.629 [2024-07-12 13:37:16.039183] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
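The fio host test now starts its own nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the RPC socket answers before any subsystems are created. A simplified stand-in for that start-and-wait step, assuming the default /var/tmp/spdk.sock socket (the real helper also handles custom RPC addresses and timeouts):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  pid=$!
  # poll the RPC socket until the target responds (or give up if it died)
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done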
00:29:18.629 [2024-07-12 13:37:16.066252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.894 [2024-07-12 13:37:16.153139] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.894 [2024-07-12 13:37:16.153192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.894 [2024-07-12 13:37:16.153221] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.894 [2024-07-12 13:37:16.153232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.894 [2024-07-12 13:37:16.153242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.894 [2024-07-12 13:37:16.153347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.894 [2024-07-12 13:37:16.153376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.894 [2024-07-12 13:37:16.153438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.894 [2024-07-12 13:37:16.153441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.894 13:37:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:18.894 13:37:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:18.894 13:37:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:19.215 [2024-07-12 13:37:16.486521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.215 13:37:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:19.215 13:37:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:19.215 13:37:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.215 13:37:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:19.499 Malloc1 00:29:19.500 13:37:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.757 13:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:20.015 13:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.272 [2024-07-12 13:37:17.525985] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.272 13:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:20.527 13:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:20.528 13:37:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:20.528 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:20.528 fio-3.35 00:29:20.528 Starting 1 thread 00:29:20.784 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.306 00:29:23.306 test: (groupid=0, jobs=1): err= 0: pid=3684302: Fri Jul 12 13:37:20 2024 00:29:23.306 read: IOPS=8081, BW=31.6MiB/s (33.1MB/s)(63.4MiB/2007msec) 00:29:23.306 slat (nsec): min=1983, max=114824, avg=2608.24, stdev=1413.93 00:29:23.306 clat (usec): min=2792, max=14819, avg=8760.21, stdev=664.92 00:29:23.306 lat (usec): min=2815, max=14821, avg=8762.81, stdev=664.86 00:29:23.306 clat percentiles (usec): 00:29:23.306 | 1.00th=[ 7308], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8225], 00:29:23.306 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:29:23.306 | 
70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9765], 00:29:23.306 | 99.00th=[10290], 99.50th=[10552], 99.90th=[12911], 99.95th=[13829], 00:29:23.306 | 99.99th=[14746] 00:29:23.306 bw ( KiB/s): min=31680, max=33168, per=99.99%, avg=32324.00, stdev=620.04, samples=4 00:29:23.306 iops : min= 7920, max= 8292, avg=8081.00, stdev=155.01, samples=4 00:29:23.306 write: IOPS=8074, BW=31.5MiB/s (33.1MB/s)(63.3MiB/2007msec); 0 zone resets 00:29:23.306 slat (usec): min=2, max=135, avg= 2.73, stdev= 1.41 00:29:23.306 clat (usec): min=1463, max=13728, avg=7033.54, stdev=603.43 00:29:23.306 lat (usec): min=1469, max=13731, avg=7036.27, stdev=603.48 00:29:23.306 clat percentiles (usec): 00:29:23.306 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:29:23.306 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:29:23.306 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7898], 00:29:23.306 | 99.00th=[ 8356], 99.50th=[ 8586], 99.90th=[12256], 99.95th=[12780], 00:29:23.306 | 99.99th=[13173] 00:29:23.306 bw ( KiB/s): min=31744, max=32584, per=99.92%, avg=32274.00, stdev=400.50, samples=4 00:29:23.306 iops : min= 7936, max= 8146, avg=8068.50, stdev=100.12, samples=4 00:29:23.306 lat (msec) : 2=0.01%, 4=0.11%, 10=98.62%, 20=1.27% 00:29:23.306 cpu : usr=57.88%, sys=37.24%, ctx=88, majf=0, minf=39 00:29:23.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:23.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:23.306 issued rwts: total=16220,16206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:23.306 00:29:23.306 Run status group 0 (all jobs): 00:29:23.306 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=63.4MiB (66.4MB), run=2007-2007msec 00:29:23.306 WRITE: bw=31.5MiB/s (33.1MB/s), 31.5MiB/s-31.5MiB/s (33.1MB/s-33.1MB/s), io=63.3MiB (66.4MB), run=2007-2007msec 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:23.306 13:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:23.306 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:23.306 fio-3.35 00:29:23.306 Starting 1 thread 00:29:23.306 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.827 00:29:25.827 test: (groupid=0, jobs=1): err= 0: pid=3684631: Fri Jul 12 13:37:23 2024 00:29:25.827 read: IOPS=7840, BW=123MiB/s (128MB/s)(246MiB/2006msec) 00:29:25.827 slat (usec): min=2, max=101, avg= 3.89, stdev= 1.97 00:29:25.827 clat (usec): min=2646, max=54629, avg=9513.02, stdev=3626.19 00:29:25.827 lat (usec): min=2650, max=54633, avg=9516.92, stdev=3626.21 00:29:25.827 clat percentiles (usec): 00:29:25.827 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7504], 00:29:25.827 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:29:25.827 | 70.00th=[10290], 80.00th=[11207], 90.00th=[12125], 95.00th=[13042], 00:29:25.827 | 99.00th=[15533], 99.50th=[17171], 99.90th=[53740], 99.95th=[54264], 00:29:25.827 | 99.99th=[54264] 00:29:25.827 bw ( KiB/s): min=54656, max=72367, per=50.62%, avg=63499.75, stdev=7239.74, samples=4 00:29:25.827 iops : min= 3416, max= 4522, avg=3968.50, stdev=452.10, samples=4 00:29:25.827 write: IOPS=4447, BW=69.5MiB/s (72.9MB/s)(130MiB/1871msec); 0 zone resets 00:29:25.827 slat (usec): min=30, max=140, avg=34.63, stdev= 6.04 00:29:25.827 clat (usec): min=4889, max=60959, avg=12045.52, stdev=3981.08 00:29:25.827 lat (usec): min=4921, max=61007, avg=12080.15, stdev=3981.05 00:29:25.827 clat percentiles (usec): 00:29:25.827 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:29:25.827 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:29:25.827 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14877], 95.00th=[15664], 00:29:25.827 | 99.00th=[17957], 99.50th=[54264], 99.90th=[58459], 99.95th=[58459], 00:29:25.827 | 99.99th=[61080] 00:29:25.827 bw ( KiB/s): min=56896, max=75273, per=92.91%, avg=66114.25, stdev=7633.97, 
samples=4 00:29:25.827 iops : min= 3556, max= 4704, avg=4132.00, stdev=476.90, samples=4 00:29:25.827 lat (msec) : 4=0.07%, 10=49.84%, 20=49.56%, 50=0.12%, 100=0.41% 00:29:25.827 cpu : usr=72.52%, sys=23.74%, ctx=34, majf=0, minf=55 00:29:25.827 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:25.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:25.827 issued rwts: total=15728,8321,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:25.827 00:29:25.827 Run status group 0 (all jobs): 00:29:25.827 READ: bw=123MiB/s (128MB/s), 123MiB/s-123MiB/s (128MB/s-128MB/s), io=246MiB (258MB), run=2006-2006msec 00:29:25.827 WRITE: bw=69.5MiB/s (72.9MB/s), 69.5MiB/s-69.5MiB/s (72.9MB/s-72.9MB/s), io=130MiB (136MB), run=1871-1871msec 00:29:25.827 13:37:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:25.827 13:37:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:25.827 13:37:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:29:26.084 13:37:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 -i 10.0.0.2 00:29:29.361 Nvme0n1 00:29:29.361 13:37:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:31.888 13:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=c24a66d9-878b-49a5-bd98-d1bcf414eed1 00:29:31.888 13:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb c24a66d9-878b-49a5-bd98-d1bcf414eed1 00:29:31.888 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=c24a66d9-878b-49a5-bd98-d1bcf414eed1 00:29:31.888 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:31.888 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:31.888 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:31.888 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:32.145 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:32.145 { 
00:29:32.145 "uuid": "c24a66d9-878b-49a5-bd98-d1bcf414eed1", 00:29:32.145 "name": "lvs_0", 00:29:32.145 "base_bdev": "Nvme0n1", 00:29:32.145 "total_data_clusters": 930, 00:29:32.145 "free_clusters": 930, 00:29:32.145 "block_size": 512, 00:29:32.145 "cluster_size": 1073741824 00:29:32.145 } 00:29:32.145 ]' 00:29:32.145 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c24a66d9-878b-49a5-bd98-d1bcf414eed1") .free_clusters' 00:29:32.145 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:32.402 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c24a66d9-878b-49a5-bd98-d1bcf414eed1") .cluster_size' 00:29:32.402 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:32.402 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:32.402 13:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:32.402 952320 00:29:32.402 13:37:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:32.659 a7411262-8c94-4623-9c2d-ce3f894ec73d 00:29:32.659 13:37:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:32.917 13:37:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:33.175 13:37:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:33.433 13:37:30 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:33.433 13:37:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:33.690 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:33.690 fio-3.35 00:29:33.690 Starting 1 thread 00:29:33.690 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.249 00:29:36.249 test: (groupid=0, jobs=1): err= 0: pid=3685919: Fri Jul 12 13:37:33 2024 00:29:36.249 read: IOPS=6151, BW=24.0MiB/s (25.2MB/s)(48.2MiB/2008msec) 00:29:36.249 slat (usec): min=2, max=147, avg= 2.68, stdev= 1.85 00:29:36.249 clat (usec): min=873, max=171502, avg=11470.35, stdev=11538.71 00:29:36.249 lat (usec): min=877, max=171541, avg=11473.03, stdev=11538.96 00:29:36.249 clat percentiles (msec): 00:29:36.249 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:29:36.249 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:29:36.249 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:29:36.249 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:36.249 | 99.99th=[ 171] 00:29:36.249 bw ( KiB/s): min=17224, max=27176, per=99.85%, avg=24568.00, stdev=4897.32, samples=4 00:29:36.249 iops : min= 4306, max= 6794, avg=6142.00, stdev=1224.33, samples=4 00:29:36.249 write: IOPS=6135, BW=24.0MiB/s (25.1MB/s)(48.1MiB/2008msec); 0 zone resets 00:29:36.249 slat (nsec): min=2216, max=95091, avg=2774.95, stdev=1259.61 00:29:36.249 clat (usec): min=364, max=169257, avg=9206.54, stdev=10815.30 00:29:36.249 lat (usec): min=368, max=169263, avg=9209.31, stdev=10815.50 00:29:36.249 clat percentiles (msec): 00:29:36.249 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:29:36.249 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:36.249 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:29:36.249 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:29:36.249 | 99.99th=[ 169] 00:29:36.249 bw ( KiB/s): min=18280, max=26672, per=99.92%, avg=24522.00, stdev=4162.04, samples=4 00:29:36.249 iops : min= 4570, max= 6668, avg=6130.50, stdev=1040.51, samples=4 00:29:36.249 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:36.249 lat (msec) : 2=0.03%, 4=0.12%, 10=59.85%, 20=39.46%, 250=0.52% 00:29:36.249 cpu : 
usr=54.61%, sys=41.31%, ctx=90, majf=0, minf=39 00:29:36.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:36.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:36.249 issued rwts: total=12352,12320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:36.249 00:29:36.249 Run status group 0 (all jobs): 00:29:36.249 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.2MiB (50.6MB), run=2008-2008msec 00:29:36.249 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=48.1MiB (50.5MB), run=2008-2008msec 00:29:36.249 13:37:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:36.249 13:37:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:37.622 13:37:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=3975e40d-d531-4cf0-bd42-c5162b6c7cd0 00:29:37.622 13:37:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 3975e40d-d531-4cf0-bd42-c5162b6c7cd0 00:29:37.622 13:37:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3975e40d-d531-4cf0-bd42-c5162b6c7cd0 00:29:37.622 13:37:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:37.622 13:37:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:37.622 13:37:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:37.622 13:37:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:37.622 13:37:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:37.622 { 00:29:37.622 "uuid": "c24a66d9-878b-49a5-bd98-d1bcf414eed1", 00:29:37.622 "name": "lvs_0", 00:29:37.622 "base_bdev": "Nvme0n1", 00:29:37.622 "total_data_clusters": 930, 00:29:37.622 "free_clusters": 0, 00:29:37.622 "block_size": 512, 00:29:37.622 "cluster_size": 1073741824 00:29:37.622 }, 00:29:37.622 { 00:29:37.622 "uuid": "3975e40d-d531-4cf0-bd42-c5162b6c7cd0", 00:29:37.622 "name": "lvs_n_0", 00:29:37.622 "base_bdev": "a7411262-8c94-4623-9c2d-ce3f894ec73d", 00:29:37.622 "total_data_clusters": 237847, 00:29:37.622 "free_clusters": 237847, 00:29:37.622 "block_size": 512, 00:29:37.622 "cluster_size": 4194304 00:29:37.622 } 00:29:37.622 ]' 00:29:37.622 13:37:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3975e40d-d531-4cf0-bd42-c5162b6c7cd0") .free_clusters' 00:29:37.879 13:37:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:37.879 13:37:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3975e40d-d531-4cf0-bd42-c5162b6c7cd0") .cluster_size' 00:29:37.879 13:37:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:37.879 13:37:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:37.879 13:37:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:37.879 951388 00:29:37.879 13:37:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:38.445 f83f21c3-1262-493b-a2f1-0ef49354a22e 00:29:38.445 13:37:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:38.702 13:37:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:38.960 13:37:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
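Note: the free-space figures echoed above are simply free_clusters x cluster_size, i.e. 930 x 1024 MiB = 952320 MiB for lvs_0 and 237847 x 4 MiB = 951388 MiB for lvs_n_0. The fio run that follows against nqn.2016-06.io.spdk:cnode3 amounts to provisioning the subsystem over RPC and then preloading the SPDK fio plugin before calling fio; a rough hand-run equivalent is sketched below, with $rootdir standing in for the SPDK checkout and example_config.fio already selecting ioengine=spdk (as the job banner below confirms).

  # expose the nested lvol as an NVMe/TCP namespace, as the harness just did
  $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
  $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
  $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
  # drive I/O through the SPDK external ioengine loaded via LD_PRELOAD
  LD_PRELOAD=$rootdir/build/fio/spdk_nvme /usr/src/fio/fio \
      $rootdir/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096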
00:29:39.217 13:37:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:39.475 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:39.475 fio-3.35 00:29:39.475 Starting 1 thread 00:29:39.475 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.003 00:29:42.003 test: (groupid=0, jobs=1): err= 0: pid=3686645: Fri Jul 12 13:37:39 2024 00:29:42.003 read: IOPS=5645, BW=22.1MiB/s (23.1MB/s)(45.2MiB/2050msec) 00:29:42.003 slat (nsec): min=1994, max=166600, avg=2644.28, stdev=2243.13 00:29:42.003 clat (usec): min=4121, max=61297, avg=12589.10, stdev=3702.08 00:29:42.003 lat (usec): min=4154, max=61300, avg=12591.75, stdev=3702.05 00:29:42.004 clat percentiles (usec): 00:29:42.004 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:29:42.004 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:29:42.004 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:29:42.004 | 99.00th=[14746], 99.50th=[53216], 99.90th=[60031], 99.95th=[61080], 00:29:42.004 | 99.99th=[61080] 00:29:42.004 bw ( KiB/s): min=22080, max=23568, per=100.00%, avg=22996.00, stdev=639.65, samples=4 00:29:42.004 iops : min= 5520, max= 5892, avg=5749.00, stdev=159.91, samples=4 00:29:42.004 write: IOPS=5632, BW=22.0MiB/s (23.1MB/s)(45.1MiB/2050msec); 0 zone resets 00:29:42.004 slat (usec): min=2, max=131, avg= 2.76, stdev= 1.72 00:29:42.004 clat (usec): min=3091, max=59708, avg=9994.03, stdev=3140.39 00:29:42.004 lat (usec): min=3100, max=59711, avg=9996.79, stdev=3140.38 00:29:42.004 clat percentiles (usec): 00:29:42.004 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:29:42.004 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:29:42.004 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:29:42.004 | 99.00th=[11863], 99.50th=[12125], 99.90th=[57934], 99.95th=[58983], 00:29:42.004 | 99.99th=[59507] 00:29:42.004 bw ( KiB/s): min=22912, max=23104, per=100.00%, avg=22992.00, stdev=80.53, samples=4 00:29:42.004 iops : min= 5728, max= 5776, avg=5748.00, stdev=20.13, samples=4 00:29:42.004 lat (msec) : 4=0.03%, 10=30.01%, 20=69.40%, 100=0.55% 00:29:42.004 cpu : usr=53.78%, sys=42.56%, ctx=128, majf=0, minf=39 00:29:42.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:42.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:42.004 issued rwts: total=11574,11547,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:42.004 00:29:42.004 Run status group 0 (all jobs): 00:29:42.004 READ: bw=22.1MiB/s (23.1MB/s), 22.1MiB/s-22.1MiB/s (23.1MB/s-23.1MB/s), io=45.2MiB (47.4MB), run=2050-2050msec 00:29:42.004 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=45.1MiB (47.3MB), run=2050-2050msec 00:29:42.004 13:37:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:42.004 13:37:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:42.004 13:37:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:46.182 13:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:46.182 13:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:49.456 13:37:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:49.456 13:37:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:51.353 rmmod nvme_tcp 00:29:51.353 rmmod nvme_fabrics 00:29:51.353 rmmod nvme_keyring 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3683940 ']' 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3683940 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3683940 ']' 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3683940 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3683940 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:51.353 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3683940' 00:29:51.354 killing process with pid 3683940 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3683940 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3683940 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:51.354 13:37:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.885 13:37:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:53.885 00:29:53.885 real 0m37.144s 00:29:53.885 user 2m21.475s 00:29:53.885 sys 0m7.474s 00:29:53.885 13:37:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:53.885 13:37:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.885 ************************************ 00:29:53.885 END TEST nvmf_fio_host 00:29:53.885 ************************************ 00:29:53.885 13:37:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:53.885 13:37:50 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:53.885 13:37:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:53.885 13:37:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.885 13:37:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.885 ************************************ 00:29:53.885 START TEST nvmf_failover 00:29:53.885 ************************************ 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:53.885 * Looking for test storage... 
00:29:53.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.885 13:37:50 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:53.886 13:37:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:55.778 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:55.778 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:55.778 Found net devices under 0000:09:00.0: cvl_0_0 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:55.778 Found net devices under 0000:09:00.1: cvl_0_1 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:55.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:55.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:29:55.778 00:29:55.778 --- 10.0.0.2 ping statistics --- 00:29:55.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.778 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:55.778 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:55.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:55.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:29:55.778 00:29:55.778 --- 10.0.0.1 ping statistics --- 00:29:55.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.779 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3690005 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3690005 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3690005 ']' 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:55.779 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:56.035 [2024-07-12 13:37:53.264613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:29:56.035 [2024-07-12 13:37:53.264698] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.035 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.035 [2024-07-12 13:37:53.300647] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:56.035 [2024-07-12 13:37:53.327806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:56.035 [2024-07-12 13:37:53.413746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.035 [2024-07-12 13:37:53.413798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.035 [2024-07-12 13:37:53.413811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.035 [2024-07-12 13:37:53.413822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.035 [2024-07-12 13:37:53.413832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.035 [2024-07-12 13:37:53.413978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.035 [2024-07-12 13:37:53.414045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.035 [2024-07-12 13:37:53.414048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.292 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:56.292 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:56.292 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:56.292 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:56.292 13:37:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:56.292 13:37:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.292 13:37:53 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:56.548 [2024-07-12 13:37:53.777434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.548 13:37:53 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:56.804 Malloc0 00:29:56.804 13:37:54 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.060 13:37:54 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:57.316 13:37:54 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.574 [2024-07-12 13:37:54.895757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.574 13:37:54 nvmf_tcp.nvmf_failover -- 
host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:57.831 [2024-07-12 13:37:55.156576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:57.831 13:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:58.108 [2024-07-12 13:37:55.449686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3690294 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3690294 /var/tmp/bdevperf.sock 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3690294 ']' 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:58.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
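Pulling the host/failover.sh steps out of the trace so far: the target is configured entirely over rpc.py before bdevperf starts. A condensed sketch follows; $rootdir is a stand-in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and the for-loop is only a compaction of the three add_listener calls logged above:

    rpc="$rootdir/scripts/rpc.py"
    ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                   # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                              # three listeners to fail over between
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf is the initiator: started idle (-z) with its own RPC socket,
    # controllers get attached to it later over /var/tmp/bdevperf.sock
    $rootdir/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &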
00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:58.108 13:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:58.366 13:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.366 13:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:58.366 13:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:58.931 NVMe0n1 00:29:58.931 13:37:56 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.189 00:29:59.189 13:37:56 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3690434 00:29:59.189 13:37:56 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:59.189 13:37:56 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:00.124 13:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.382 [2024-07-12 13:37:57.760953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761207] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 [2024-07-12 13:37:57.761329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc5b0 is same with the state(5) to be set 00:30:00.382 13:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:03.662 13:38:00 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:03.920 00:30:03.920 13:38:01 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:04.177 [2024-07-12 13:38:01.432655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dd970 is same with the state(5) to be set 00:30:04.177 [2024-07-12 13:38:01.432705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dd970 is same with the state(5) to be set 00:30:04.177 [2024-07-12 13:38:01.432719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dd970 is same with the state(5) to be set 00:30:04.177 [2024-07-12 13:38:01.432732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dd970 is same with the state(5) to be set 00:30:04.177 [2024-07-12 13:38:01.432744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dd970 is same with the state(5) to be set 00:30:04.177 [2024-07-12 13:38:01.432763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dd970 is same with the state(5) to be set 00:30:04.177 13:38:01 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:07.460 13:38:04 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.460 [2024-07-12 13:38:04.734239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.460 13:38:04 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:08.394 13:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:08.652 [2024-07-12 13:38:05.998989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.652 [2024-07-12 13:38:05.999232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 [2024-07-12 13:38:05.999514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de050 is same with the state(5) to be set 00:30:08.653 13:38:06 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3690434 00:30:15.218 0 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3690294 00:30:15.218 13:38:11 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3690294 ']' 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3690294 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3690294 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3690294' 00:30:15.218 killing process with pid 3690294 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3690294 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3690294 00:30:15.218 13:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:15.218 [2024-07-12 13:37:55.516049] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:30:15.218 [2024-07-12 13:37:55.516125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690294 ] 00:30:15.218 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.218 [2024-07-12 13:37:55.548526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:15.218 [2024-07-12 13:37:55.577851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.218 [2024-07-12 13:37:55.668068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.218 Running I/O for 15 seconds... 
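The failover itself is driven by alternately removing and restoring listeners while bdevperf keeps its 15-second verify workload running. A condensed sketch of the rpc.py calls traced above ($rootdir, $rpc, $brpc, $nqn and test_pid are shorthand introduced here; ports, NQN and flags are exactly as logged). The ABORTED - SQ DELETION storm replayed from try.txt below is the initiator-side view of this: the timestamps line up with the 4420 listener being removed at 13:37:57, and the run still finishes with 0 failures because I/O continues on the remaining path.

    brpc="$rpc -s /var/tmp/bdevperf.sock"              # bdevperf's RPC socket
    nqn=nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn    # second path
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    test_pid=$!                                                                              # 3690434 in this run
    sleep 1; $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420             # drop the active path
    sleep 3; $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421                      # fail over again
    sleep 3; $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420                # bring 4420 back
    sleep 1; $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
    wait $test_pid                                     # returns when the 15 s run ends ('0' in the trace above)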
00:30:15.218 [2024-07-12 13:37:57.761772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.761812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.761841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.761857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.761874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.761888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.761904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.761918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.761933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.761947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.761962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.761976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.761991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.218 [2024-07-12 13:37:57.762035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.218 [2024-07-12 13:37:57.762079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.218 [2024-07-12 13:37:57.762108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762123] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.218 [2024-07-12 13:37:57.762143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.218 [2024-07-12 13:37:57.762173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.218 [2024-07-12 13:37:57.762201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762442] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.218 [2024-07-12 13:37:57.762801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.218 [2024-07-12 13:37:57.762814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.762829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.762842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.762856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.762869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.762884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.762901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.762916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.762929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.762943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.762956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.762971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.762984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.762998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 
13:37:57.763342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.219 [2024-07-12 13:37:57.763759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.219 [2024-07-12 13:37:57.763787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.219 [2024-07-12 13:37:57.763814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.219 [2024-07-12 13:37:57.763841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.219 [2024-07-12 13:37:57.763868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.219 [2024-07-12 13:37:57.763895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.219 [2024-07-12 13:37:57.763922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.763977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.763991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.764008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.764023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.219 [2024-07-12 13:37:57.764036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.219 [2024-07-12 13:37:57.764050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:15.220 [2024-07-12 13:37:57.764523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.764976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.764990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.765017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.765046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.765074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.220 [2024-07-12 13:37:57.765102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.220 [2024-07-12 13:37:57.765136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.220 [2024-07-12 13:37:57.765164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.220 [2024-07-12 13:37:57.765193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.220 [2024-07-12 13:37:57.765222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.220 [2024-07-12 13:37:57.765250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.220 [2024-07-12 13:37:57.765279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.220 [2024-07-12 13:37:57.765294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.220 [2024-07-12 13:37:57.765307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:37:57.765576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:15.221 [2024-07-12 13:37:57.765620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:15.221 [2024-07-12 13:37:57.765632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71592 len:8 PRP1 0x0 PRP2 0x0 00:30:15.221 [2024-07-12 13:37:57.765645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765702] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1153f10 was disconnected and freed. reset controller. 
00:30:15.221 [2024-07-12 13:37:57.765720] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:15.221 [2024-07-12 13:37:57.765752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.221 [2024-07-12 13:37:57.765770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.221 [2024-07-12 13:37:57.765799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.221 [2024-07-12 13:37:57.765891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.221 [2024-07-12 13:37:57.765918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:37:57.765931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.221 [2024-07-12 13:37:57.769191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.221 [2024-07-12 13:37:57.769229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112d850 (9): Bad file descriptor 00:30:15.221 [2024-07-12 13:37:57.799275] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:15.221 [2024-07-12 13:38:01.432596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.221 [2024-07-12 13:38:01.432655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.432672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.221 [2024-07-12 13:38:01.432711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.432727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.221 [2024-07-12 13:38:01.432741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.432756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.221 [2024-07-12 13:38:01.432769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.432782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112d850 is same with the state(5) to be set 00:30:15.221 [2024-07-12 13:38:01.433117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.221 [2024-07-12 13:38:01.433701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.221 [2024-07-12 13:38:01.433718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.433746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.433774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.433812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.433841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.433870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.433898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:15.222 [2024-07-12 13:38:01.433926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.433955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.433983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.433996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-07-12 13:38:01.434120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-07-12 13:38:01.434149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-07-12 13:38:01.434181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-07-12 13:38:01.434210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434225] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-07-12 13:38:01.434238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-07-12 13:38:01.434266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-07-12 13:38:01.434295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.222 [2024-07-12 13:38:01.434345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434533] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.222 [2024-07-12 13:38:01.434760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.222 [2024-07-12 13:38:01.434775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.434789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.434805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.434818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.434833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72976 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.434847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.434863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.434877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.434892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.434907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.434922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.434936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.434955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.434970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.434985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 
[2024-07-12 13:38:01.435146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:15.223 [2024-07-12 13:38:01.435573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.435972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.435986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.436001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.436015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.223 [2024-07-12 13:38:01.436030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.223 [2024-07-12 13:38:01.436044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs, 13:38:01.436059-13:38:01.436940: queued READ (lba 72400-72584) and WRITE (lba 73112-73152) commands on sqid:1, each printed and completed as ABORTED - SQ DELETION (00/08) while the qpair is drained ...]
00:30:15.224 [2024-07-12 13:38:01.436968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:15.224 [2024-07-12 13:38:01.436983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:15.224 [2024-07-12 13:38:01.436995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73160 len:8 PRP1 0x0 PRP2 0x0
00:30:15.224 [2024-07-12 13:38:01.437009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:15.224 [2024-07-12 13:38:01.437066] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12f8470 was disconnected and freed. reset controller.
00:30:15.224 [2024-07-12 13:38:01.437084] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:15.224 [2024-07-12 13:38:01.437099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:15.224 [2024-07-12 13:38:01.440350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:15.224 [2024-07-12 13:38:01.440388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112d850 (9): Bad file descriptor
00:30:15.224 [2024-07-12 13:38:01.522730] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... 13:38:05.996530-13:38:05.996675: four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:30:15.224 [2024-07-12 13:38:05.996688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112d850 is same with the state(5) to be set
[... repeated NOTICE pairs, 13:38:06.000450-13:38:06.004258: queued WRITE (lba 98280-98840) and READ (lba 97832-98272) commands on sqid:1, each printed and completed as ABORTED - SQ DELETION (00/08) while the next qpair is drained ...]
00:30:15.227 [2024-07-12 13:38:06.004287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:15.228 [2024-07-12 13:38:06.004310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:15.228 [2024-07-12 13:38:06.004334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98848 len:8 PRP1 0x0 PRP2 0x0
00:30:15.228 [2024-07-12 13:38:06.004348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:15.228 [2024-07-12 13:38:06.004406] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x115d5c0 was disconnected and freed. reset controller.
00:30:15.228 [2024-07-12 13:38:06.004425] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:15.228 [2024-07-12 13:38:06.004439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:15.228 [2024-07-12 13:38:06.007701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:15.228 [2024-07-12 13:38:06.007740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112d850 (9): Bad file descriptor
00:30:15.228 [2024-07-12 13:38:06.083734] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
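The run above ends on the third 'Resetting controller successful' notice of the 15-second workload, one per forced failover hop, and that count is exactly what failover.sh asserts a few lines further down with its grep -c / (( count != 3 )) check. A minimal sketch of that assertion over a saved bdevperf log follows; the log path and script name are assumptions for illustration and this is not the autotest code itself.

    #!/usr/bin/env bash
    # Count completed controller resets in a captured bdevperf log and fail
    # unless one successful reset per forced failover hop is observed.
    set -euo pipefail

    log_file=${1:-/tmp/bdevperf.log}   # hypothetical path to the captured output
    expected=3                         # one reset per failover exercised by the test

    count=$(grep -c 'Resetting controller successful' "$log_file" || true)
    if (( count != expected )); then
        echo "expected ${expected} successful resets, got ${count}" >&2
        exit 1
    fi
    echo "observed ${count} successful resets"

The real script performs the same check inline against the bdevperf output it just collected, as the trace below shows.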
00:30:15.228 
00:30:15.228                                            Latency(us)
00:30:15.228 Device Information            : runtime(s)      IOPS     MiB/s    Fail/s     TO/s    Average       min       max
00:30:15.228 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:15.228 Verification LBA range: start 0x0 length 0x4000
00:30:15.228 NVMe0n1                       : 15.00        8087.95     31.59    486.11     0.00   14901.08    813.13  16311.18
00:30:15.228 ===================================================================================================================
00:30:15.228 Total                         :              8087.95     31.59    486.11     0.00   14901.08    813.13  16311.18
00:30:15.228 Received shutdown signal, test time was about 15.000000 seconds
00:30:15.228 
00:30:15.228                                            Latency(us)
00:30:15.228 Device Information            : runtime(s)      IOPS     MiB/s    Fail/s     TO/s    Average       min       max
00:30:15.228 ===================================================================================================================
00:30:15.228 Total                         :                 0.00      0.00      0.00      0.00       0.00      0.00      0.00
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3692155
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3692155 /var/tmp/bdevperf.sock
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3692155 ']'
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:15.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
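The second bdevperf instance above is started with -z, so it sits idle until it is driven over its RPC socket; waitforlisten simply polls that socket before the script issues any bdev_nvme RPCs, and the remainder of its trace continues below. A generic polling sketch is shown here for orientation only; it is not the autotest_common.sh helper, and the retry budget and rpc.py invocation are assumptions.

    # Poll a UNIX-domain RPC socket until the SPDK application answers.
    wait_for_rpc_sock() {
        local sock=$1 retries=${2:-100}   # socket path and retry budget come from the caller
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        for ((i = 0; i < retries; i++)); do
            # rpc_get_methods fails until the application is listening on the socket
            if "$rpc" -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }

    wait_for_rpc_sock /var/tmp/bdevperf.sock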
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:15.228 13:38:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:15.228 13:38:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:15.228 13:38:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:30:15.228 13:38:12 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:15.228 [2024-07-12 13:38:12.453413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:15.228 13:38:12 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:15.484 [2024-07-12 13:38:12.690035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:15.484 13:38:12 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:15.740 NVMe0n1
00:30:15.740 13:38:13 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:16.305 
00:30:16.305 13:38:13 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:16.871 
00:30:16.871 13:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:16.871 13:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:30:16.871 13:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:17.129 13:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:30:20.433 13:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:20.433 13:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:30:20.433 13:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3692941
00:30:20.433 13:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:20.433 13:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3692941
00:30:21.807 0
00:30:21.807 13:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:21.807 [2024-07-12 13:38:11.980930] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization...
00:30:21.807 [2024-07-12 13:38:11.981020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692155 ] 00:30:21.807 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.807 [2024-07-12 13:38:12.013185] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:21.807 [2024-07-12 13:38:12.041636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.807 [2024-07-12 13:38:12.124142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.807 [2024-07-12 13:38:14.567749] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:21.807 [2024-07-12 13:38:14.567833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.807 [2024-07-12 13:38:14.567855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.807 [2024-07-12 13:38:14.567885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.807 [2024-07-12 13:38:14.567899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.807 [2024-07-12 13:38:14.567913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.807 [2024-07-12 13:38:14.567926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.807 [2024-07-12 13:38:14.567940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.807 [2024-07-12 13:38:14.567953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.807 [2024-07-12 13:38:14.567967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.807 [2024-07-12 13:38:14.568011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.807 [2024-07-12 13:38:14.568042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1233850 (9): Bad file descriptor 00:30:21.807 [2024-07-12 13:38:14.572198] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:21.807 Running I/O for 1 seconds... 
00:30:21.807 00:30:21.807 Latency(us) 00:30:21.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.807 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:21.807 Verification LBA range: start 0x0 length 0x4000 00:30:21.807 NVMe0n1 : 1.01 8727.62 34.09 0.00 0.00 14604.24 2839.89 12913.02 00:30:21.807 =================================================================================================================== 00:30:21.807 Total : 8727.62 34.09 0.00 0.00 14604.24 2839.89 12913.02 00:30:21.807 13:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:21.807 13:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:21.807 13:38:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.064 13:38:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:22.064 13:38:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:22.322 13:38:19 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:22.580 13:38:19 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:25.858 13:38:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:25.858 13:38:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3692155 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3692155 ']' 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3692155 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3692155 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3692155' 00:30:25.858 killing process with pid 3692155 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3692155 00:30:25.858 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3692155 00:30:26.114 13:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:26.114 13:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:26.371 
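After the 1-second verification run above, the script repeats the same pattern for the remaining paths: confirm NVMe0 is still reported, detach the next port, and re-check before finally stopping the bdevperf process. A short sketch of that wind-down, with the killprocess helper simplified to a plain kill (an assumption; the real helper also validates the process name first):

  # Sketch of the per-path teardown traced above; $rpc, $sock and $nqn are as in
  # the previous sketch, and $bdevperf_pid was recorded when bdevperf started.
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn

  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  sleep 3

  # The controller should still be reported on the last remaining path.
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  kill "$bdevperf_pid" && wait "$bdevperf_pid"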
13:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:26.371 rmmod nvme_tcp 00:30:26.371 rmmod nvme_fabrics 00:30:26.371 rmmod nvme_keyring 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3690005 ']' 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3690005 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3690005 ']' 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3690005 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3690005 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3690005' 00:30:26.371 killing process with pid 3690005 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3690005 00:30:26.371 13:38:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3690005 00:30:26.630 13:38:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:26.630 13:38:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:26.630 13:38:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:26.630 13:38:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.630 13:38:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:26.630 13:38:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.630 13:38:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.630 13:38:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.161 13:38:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:29.161 00:30:29.161 real 0m35.204s 00:30:29.161 user 2m1.879s 00:30:29.161 sys 0m6.735s 00:30:29.161 13:38:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:29.161 13:38:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
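The failover test finishes by deleting the subsystem and running nvmftestfini, which unloads the NVMe/TCP kernel modules, stops the target, and tears down the test network namespace, as traced above. A rough sketch of that cleanup; the namespace removal is sketched inline since the _remove_spdk_ns internals are not shown in the trace (an assumption):

  # Rough sketch of the teardown sequence traced above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

  # nvmftestfini: unload the initiator-side modules and stop the target app.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"   # $nvmfpid is the nvmf_tgt pid

  # Remove the test namespace and flush the leftover initiator address.
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null         # assumption: how _remove_spdk_ns cleans up
  ip -4 addr flush cvl_0_1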
00:30:29.161 ************************************ 00:30:29.161 END TEST nvmf_failover 00:30:29.161 ************************************ 00:30:29.161 13:38:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:29.161 13:38:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:29.161 13:38:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:29.161 13:38:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.161 13:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.161 ************************************ 00:30:29.161 START TEST nvmf_host_discovery 00:30:29.161 ************************************ 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:29.161 * Looking for test storage... 00:30:29.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.161 13:38:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:29.162 13:38:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:29.162 13:38:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.064 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.065 13:38:28 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:31.065 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:31.065 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:31.065 13:38:28 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:31.065 Found net devices under 0000:09:00.0: cvl_0_0 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:31.065 Found net devices under 0000:09:00.1: cvl_0_1 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.065 13:38:28 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:31.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:30:31.065 00:30:31.065 --- 10.0.0.2 ping statistics --- 00:30:31.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.065 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:30:31.065 00:30:31.065 --- 10.0.0.1 ping statistics --- 00:30:31.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.065 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3695537 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3695537 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3695537 ']' 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:31.065 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.065 [2024-07-12 13:38:28.392191] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:30:31.065 [2024-07-12 13:38:28.392279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.065 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.065 [2024-07-12 13:38:28.430989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:31.065 [2024-07-12 13:38:28.457296] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.323 [2024-07-12 13:38:28.546049] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.323 [2024-07-12 13:38:28.546096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.323 [2024-07-12 13:38:28.546130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.323 [2024-07-12 13:38:28.546142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.323 [2024-07-12 13:38:28.546151] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
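With the target running inside the cvl_0_0_ns_spdk namespace, the discovery test traced below creates the TCP transport, exposes the discovery service on port 8009, backs a subsystem with null bdevs, and drives a second, host-side nvmf_tgt through bdev_nvme_start_discovery. A condensed sketch of that flow, assuming direct rpc.py calls in place of the rpc_cmd wrapper; the script itself interleaves these calls with state checks, so the order here is compressed:

  # Condensed sketch of the discovery-test setup traced below. rpc.py against the
  # default socket talks to the target in the namespace; -s /tmp/host.sock talks
  # to the host-side nvmf_tgt that runs the discovery client.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Target side: transport, discovery listener, and a null-backed subsystem.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $rpc bdev_null_create null0 1000 512
  $rpc bdev_null_create null1 1000 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # Host side: start discovery against port 8009; nvme0 / nvme0n1 should appear.
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  $rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'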
00:30:31.323 [2024-07-12 13:38:28.546184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.323 [2024-07-12 13:38:28.688067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.323 [2024-07-12 13:38:28.696245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.323 null0 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.323 null1 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3695558 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3695558 /tmp/host.sock 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3695558 ']' 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:31.323 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:31.323 13:38:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.323 [2024-07-12 13:38:28.767037] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:30:31.323 [2024-07-12 13:38:28.767119] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695558 ] 00:30:31.581 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.581 [2024-07-12 13:38:28.800504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:31.581 [2024-07-12 13:38:28.826342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.581 [2024-07-12 13:38:28.911744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:31.581 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.839 13:38:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.839 [2024-07-12 13:38:29.305864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.839 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:32.097 13:38:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:32.097 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:30:32.098 13:38:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:32.662 [2024-07-12 13:38:30.076522] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:32.662 [2024-07-12 13:38:30.076552] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:32.662 [2024-07-12 13:38:30.076580] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:32.922 [2024-07-12 13:38:30.162885] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:32.922 [2024-07-12 13:38:30.347694] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:30:32.922 [2024-07-12 13:38:30.347717] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.180 13:38:30 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.180 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.439 [2024-07-12 13:38:30.762006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:33.439 [2024-07-12 13:38:30.762239] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:33.439 [2024-07-12 13:38:30.762287] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.439 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.440 [2024-07-12 13:38:30.890626] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:33.440 13:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:33.697 [2024-07-12 13:38:31.151817] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:33.697 [2024-07-12 13:38:31.151854] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:33.697 [2024-07-12 13:38:31.151863] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.629 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.630 [2024-07-12 13:38:31.982519] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:34.630 [2024-07-12 13:38:31.982550] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:34.630 [2024-07-12 13:38:31.989166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.630 [2024-07-12 13:38:31.989211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.630 [2024-07-12 13:38:31.989228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.630 [2024-07-12 13:38:31.989243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.630 [2024-07-12 13:38:31.989258] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.630 [2024-07-12 13:38:31.989272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.630 [2024-07-12 13:38:31.989286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.630 [2024-07-12 13:38:31.989310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.630 [2024-07-12 13:38:31.989334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f76c0 is same with the state(5) to be set 00:30:34.630 [2024-07-12 13:38:31.999175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f76c0 (9): Bad file descriptor 00:30:34.630 13:38:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.630 [2024-07-12 13:38:32.009218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:34.630 [2024-07-12 13:38:32.009441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.630 [2024-07-12 13:38:32.009470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f76c0 with addr=10.0.0.2, port=4420 00:30:34.630 [2024-07-12 13:38:32.009487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f76c0 is same with the state(5) to be set 00:30:34.630 [2024-07-12 13:38:32.009510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f76c0 (9): Bad file descriptor 00:30:34.630 [2024-07-12 13:38:32.009530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:34.630 [2024-07-12 13:38:32.009545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:34.630 [2024-07-12 13:38:32.009560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:34.630 [2024-07-12 13:38:32.009581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
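For reference, the waitforcondition polling loop whose xtrace (common/autotest_common.sh@912-@918) repeats throughout this log reduces to roughly the shape below. This is a sketch reconstructed from the trace, assuming a plain eval/sleep retry loop, not the verbatim helper from autotest_common.sh.

# Reconstructed from the xtrace above (an approximation, not the verbatim helper):
waitforcondition() {
    local cond=$1      # e.g. '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
    local max=10       # poll up to ten times, one second apart
    while (( max-- )); do
        # evaluate the condition string; any helper it references (get_bdev_list, ...) runs here
        if eval "$cond"; then
            return 0   # condition met
        fi
        sleep 1
    done
    return 1           # condition never became true within the window
}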
00:30:34.630 [2024-07-12 13:38:32.019326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:34.630 [2024-07-12 13:38:32.019510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.630 [2024-07-12 13:38:32.019538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f76c0 with addr=10.0.0.2, port=4420 00:30:34.630 [2024-07-12 13:38:32.019554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f76c0 is same with the state(5) to be set 00:30:34.630 [2024-07-12 13:38:32.019576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f76c0 (9): Bad file descriptor 00:30:34.630 [2024-07-12 13:38:32.019607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:34.630 [2024-07-12 13:38:32.019620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:34.630 [2024-07-12 13:38:32.019634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:34.630 [2024-07-12 13:38:32.019654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:34.630 [2024-07-12 13:38:32.029400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:34.630 [2024-07-12 13:38:32.029604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.630 [2024-07-12 13:38:32.029635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f76c0 with addr=10.0.0.2, port=4420 00:30:34.630 [2024-07-12 13:38:32.029658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f76c0 is same with the state(5) to be set 00:30:34.630 [2024-07-12 13:38:32.029682] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f76c0 (9): Bad file descriptor 00:30:34.630 [2024-07-12 13:38:32.029702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:34.630 [2024-07-12 13:38:32.029716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:34.630 [2024-07-12 13:38:32.029730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:34.630 [2024-07-12 13:38:32.029763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:34.630 [2024-07-12 13:38:32.039475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:34.630 [2024-07-12 13:38:32.039641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.630 [2024-07-12 13:38:32.039669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f76c0 with addr=10.0.0.2, port=4420 00:30:34.630 [2024-07-12 13:38:32.039685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f76c0 is same with the state(5) to be set 00:30:34.630 [2024-07-12 13:38:32.039707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f76c0 (9): Bad file descriptor 00:30:34.630 [2024-07-12 13:38:32.039728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:34.630 [2024-07-12 13:38:32.039742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:34.630 [2024-07-12 13:38:32.039755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:34.630 [2024-07-12 13:38:32.039775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:34.630 [2024-07-12 13:38:32.049551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:34.630 [2024-07-12 13:38:32.049762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.630 [2024-07-12 13:38:32.049789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f76c0 with addr=10.0.0.2, port=4420 00:30:34.630 [2024-07-12 13:38:32.049805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f76c0 is same with the state(5) to be set 00:30:34.630 [2024-07-12 13:38:32.049827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f76c0 (9): Bad file descriptor 00:30:34.630 [2024-07-12 13:38:32.049847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:34.630 [2024-07-12 13:38:32.049861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:34.630 [2024-07-12 13:38:32.049874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:34.630 [2024-07-12 13:38:32.049907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
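The conditions being polled above call a handful of small helpers from host/discovery.sh; reconstructed from their xtrace (so treat the exact bodies as approximations), they look roughly like this:

# Approximate reconstruction of the discovery.sh helpers seen in the trace above.
get_subsystem_names() {
    # Names of controllers the host has attached, e.g. "nvme0"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # Service IDs (ports) of every path to controller $1, e.g. "4420 4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    # Number of bdev notifications newer than the last consumed notify_id
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}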
00:30:34.630 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.630 [2024-07-12 13:38:32.059621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:34.630 [2024-07-12 13:38:32.059841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.630 [2024-07-12 13:38:32.059867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f76c0 with addr=10.0.0.2, port=4420 00:30:34.630 [2024-07-12 13:38:32.059883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f76c0 is same with the state(5) to be set 00:30:34.630 [2024-07-12 13:38:32.059906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f76c0 (9): Bad file descriptor 00:30:34.630 [2024-07-12 13:38:32.059944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:34.630 [2024-07-12 13:38:32.059963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:34.630 [2024-07-12 13:38:32.059977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:34.631 [2024-07-12 13:38:32.060011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:34.631 [2024-07-12 13:38:32.068953] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:34.631 [2024-07-12 13:38:32.068980] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:34.631 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:30:34.888 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:30:34.888 13:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:35.821 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:35.822 13:38:33 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:35.822 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.079 13:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.011 [2024-07-12 13:38:34.407465] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:37.011 [2024-07-12 13:38:34.407504] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:37.011 [2024-07-12 13:38:34.407528] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:37.268 [2024-07-12 13:38:34.493816] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:37.527 [2024-07-12 13:38:34.805811] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:37.527 [2024-07-12 13:38:34.805855] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 request: 00:30:37.527 { 00:30:37.527 "name": "nvme", 00:30:37.527 "trtype": 
"tcp", 00:30:37.527 "traddr": "10.0.0.2", 00:30:37.527 "adrfam": "ipv4", 00:30:37.527 "trsvcid": "8009", 00:30:37.527 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:37.527 "wait_for_attach": true, 00:30:37.527 "method": "bdev_nvme_start_discovery", 00:30:37.527 "req_id": 1 00:30:37.527 } 00:30:37.527 Got JSON-RPC error response 00:30:37.527 response: 00:30:37.527 { 00:30:37.527 "code": -17, 00:30:37.527 "message": "File exists" 00:30:37.527 } 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 request: 00:30:37.527 { 00:30:37.527 "name": "nvme_second", 00:30:37.527 "trtype": "tcp", 00:30:37.527 "traddr": "10.0.0.2", 00:30:37.527 "adrfam": "ipv4", 00:30:37.527 "trsvcid": "8009", 00:30:37.527 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:37.527 "wait_for_attach": true, 00:30:37.527 "method": "bdev_nvme_start_discovery", 00:30:37.527 "req_id": 1 00:30:37.527 } 00:30:37.527 Got JSON-RPC error response 00:30:37.527 response: 00:30:37.527 { 00:30:37.527 "code": -17, 00:30:37.527 "message": "File exists" 00:30:37.527 } 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:37.527 13:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:37.785 13:38:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.785 13:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.754 [2024-07-12 13:38:36.017258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.754 [2024-07-12 13:38:36.017334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1511d40 with addr=10.0.0.2, port=8010 00:30:38.754 [2024-07-12 13:38:36.017377] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:38.754 [2024-07-12 13:38:36.017392] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:38.754 [2024-07-12 13:38:36.017406] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:39.687 [2024-07-12 13:38:37.019746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.687 [2024-07-12 13:38:37.019814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1511d40 with addr=10.0.0.2, port=8010 00:30:39.687 [2024-07-12 13:38:37.019844] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:39.687 [2024-07-12 13:38:37.019873] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:39.687 [2024-07-12 13:38:37.019887] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:40.619 [2024-07-12 13:38:38.021909] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:40.619 request: 00:30:40.619 { 00:30:40.619 "name": "nvme_second", 00:30:40.619 "trtype": "tcp", 00:30:40.619 "traddr": "10.0.0.2", 00:30:40.619 "adrfam": "ipv4", 00:30:40.619 "trsvcid": "8010", 00:30:40.619 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:40.619 "wait_for_attach": false, 00:30:40.619 "attach_timeout_ms": 3000, 00:30:40.619 "method": "bdev_nvme_start_discovery", 00:30:40.619 "req_id": 1 00:30:40.619 } 00:30:40.619 Got JSON-RPC error response 00:30:40.619 response: 00:30:40.619 { 00:30:40.619 "code": -110, 00:30:40.619 "message": "Connection timed out" 00:30:40.619 } 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 
-- # [[ 1 == 0 ]] 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3695558 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:40.619 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:40.619 rmmod nvme_tcp 00:30:40.877 rmmod nvme_fabrics 00:30:40.877 rmmod nvme_keyring 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3695537 ']' 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3695537 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3695537 ']' 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3695537 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3695537 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:40.877 13:38:38 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3695537' 00:30:40.877 killing process with pid 3695537 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3695537 00:30:40.877 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3695537 00:30:41.135 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.135 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.135 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.135 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.135 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.135 13:38:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.135 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.135 13:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.045 13:38:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:43.045 00:30:43.045 real 0m14.297s 00:30:43.045 user 0m21.334s 00:30:43.045 sys 0m2.857s 00:30:43.045 13:38:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.045 13:38:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:43.045 ************************************ 00:30:43.045 END TEST nvmf_host_discovery 00:30:43.045 ************************************ 00:30:43.045 13:38:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:43.045 13:38:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:43.045 13:38:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:43.045 13:38:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.045 13:38:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.045 ************************************ 00:30:43.045 START TEST nvmf_host_multipath_status 00:30:43.045 ************************************ 00:30:43.045 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:43.304 * Looking for test storage... 
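For reference, the negative discovery case recorded above (nvmf_host_discovery, host/discovery.sh@155) can be reproduced by hand with the same RPC the test wraps in rpc_cmd. This is only a hedged sketch of what the log shows, assuming an SPDK host application listening on /tmp/host.sock and nothing serving TCP port 8010 on 10.0.0.2; the relative scripts/rpc.py path stands in for the full workspace path used in the run. The call is expected to fail with JSON-RPC error -110 (Connection timed out) once the 3000 ms attach timeout expires, after the per-attempt connect() failures with errno 111 seen above.

  # Ask the host to attach a discovery controller on a port nobody listens on;
  # -T 3000 bounds the attach attempt to 3 seconds instead of waiting forever.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -T 3000
  # Expected result: "Connection timed out" (code -110), matching the
  # bdev_nvme_start_discovery response captured in the log above.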
00:30:43.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:43.304 13:38:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:43.304 13:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:45.204 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:45.205 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:45.205 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:45.205 Found net devices under 0000:09:00.0: cvl_0_0 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:45.205 Found net devices under 0000:09:00.1: cvl_0_1 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:45.205 13:38:42 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.205 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.463 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:45.463 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:45.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:30:45.464 00:30:45.464 --- 10.0.0.2 ping statistics --- 00:30:45.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.464 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:30:45.464 00:30:45.464 --- 10.0.0.1 ping statistics --- 00:30:45.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.464 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3698729 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3698729 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3698729 ']' 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:45.464 13:38:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:45.464 [2024-07-12 13:38:42.827232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
00:30:45.464 [2024-07-12 13:38:42.827329] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.464 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.464 [2024-07-12 13:38:42.869365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:45.464 [2024-07-12 13:38:42.896786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:45.724 [2024-07-12 13:38:42.982372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.724 [2024-07-12 13:38:42.982423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.724 [2024-07-12 13:38:42.982457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.724 [2024-07-12 13:38:42.982470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.724 [2024-07-12 13:38:42.982480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.724 [2024-07-12 13:38:42.982543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.724 [2024-07-12 13:38:42.982548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.724 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:45.724 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:45.724 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:45.724 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:45.724 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:45.724 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.724 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3698729 00:30:45.724 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:45.982 [2024-07-12 13:38:43.393883] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.982 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:46.548 Malloc0 00:30:46.548 13:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:46.548 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:46.806 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:30:47.064 [2024-07-12 13:38:44.471180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.064 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:47.322 [2024-07-12 13:38:44.711816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3699010 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3699010 /var/tmp/bdevperf.sock 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3699010 ']' 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:47.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
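The multipath_status run set up above drives one subsystem with two TCP listeners and attaches both to bdevperf as paths of Nvme0n1, then the rest of the log flips ANA states and reads path status back. A condensed, hedged sketch of those steps, using only the RPCs visible in the log; it assumes a running nvmf_tgt on its default RPC socket and the bdevperf instance on /var/tmp/bdevperf.sock, with scripts/rpc.py shortened from the full workspace path.

  # Target side: one subsystem, one malloc namespace, two ANA-enabled TCP listeners.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Host side (bdevperf RPC socket): attach the same controller over both ports;
  # the second attach uses -x multipath so 4420 and 4421 become two paths of Nvme0n1.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  # Status check pattern repeated throughout the remainder of the log: change a
  # listener's ANA state on the target, then query the host's view of that path.
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'

The check_status/port_status helpers exercised in the log below wrap exactly this bdev_nvme_get_io_paths plus jq query, selecting the current, connected, and accessible fields for ports 4420 and 4421.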
00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:47.322 13:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:47.580 13:38:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.580 13:38:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:47.580 13:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:47.838 13:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:48.404 Nvme0n1 00:30:48.404 13:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:48.662 Nvme0n1 00:30:48.921 13:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:48.921 13:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:50.822 13:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:50.822 13:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:51.081 13:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:51.339 13:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:52.274 13:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:52.274 13:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:52.274 13:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.274 13:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:52.533 13:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.533 13:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:52.533 13:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.533 13:38:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:52.791 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:52.791 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:52.791 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.791 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:53.051 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.051 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:53.051 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.051 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:53.310 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.310 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:53.310 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.310 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:53.568 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.568 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:53.568 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.568 13:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:53.826 13:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.826 13:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:53.826 13:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:54.084 13:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:54.342 13:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:55.306 13:38:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:55.306 13:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:55.306 13:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.306 13:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:55.564 13:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:55.564 13:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:55.564 13:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.564 13:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:55.821 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.821 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:55.821 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.821 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:56.079 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.080 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:56.080 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.080 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:56.338 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.338 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:56.338 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.338 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:56.596 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.596 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:56.596 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.596 13:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:56.854 13:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.854 13:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:56.854 13:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:57.125 13:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:57.382 13:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:58.313 13:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:58.313 13:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:58.313 13:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.313 13:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:58.571 13:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.571 13:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:58.571 13:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.571 13:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:58.839 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:58.839 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:58.839 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.839 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:59.097 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.097 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:59.097 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.097 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:59.354 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.354 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:59.354 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.354 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:59.611 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.611 13:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:59.611 13:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.611 13:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:59.868 13:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.868 13:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:59.868 13:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:00.125 13:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:00.383 13:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:01.315 13:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:01.315 13:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:01.315 13:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.315 13:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:01.573 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.573 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:01.573 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.573 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:01.831 13:38:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:01.831 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:01.831 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.831 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:02.089 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.089 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:02.089 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.089 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:02.348 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.348 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:02.348 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.348 13:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:02.606 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.606 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:02.606 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.606 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:02.863 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:02.863 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:02.863 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:03.121 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:03.379 13:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:04.748 13:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:04.748 13:39:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:04.748 13:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.748 13:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:04.748 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:04.748 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:04.748 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.748 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:05.005 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:05.005 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:05.005 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.005 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:05.263 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.263 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:05.263 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.263 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:05.520 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.520 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:05.520 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.520 13:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:05.777 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:05.777 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:05.777 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.777 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:06.034 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:06.034 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:06.034 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:06.291 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:06.547 13:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:07.480 13:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:07.480 13:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:07.480 13:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.480 13:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:07.737 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:07.737 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:07.737 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.737 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:07.995 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.995 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:07.995 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.995 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:08.253 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.253 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:08.253 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.253 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:08.510 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.510 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:08.510 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.510 13:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:08.768 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:08.768 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:08.768 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.768 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:09.026 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.026 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:09.290 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:09.290 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:09.587 13:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:09.863 13:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:10.796 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:10.796 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:10.796 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.796 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:11.054 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.054 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:11.054 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.054 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:31:11.312 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.312 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:11.312 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.312 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:11.570 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.570 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:11.570 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.570 13:39:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:11.828 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.828 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:11.828 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.828 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:12.086 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.086 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:12.086 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.086 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:12.365 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.365 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:12.366 13:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:12.624 13:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:12.882 13:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:14.256 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:31:14.256 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:14.256 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.256 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:14.256 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:14.256 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:14.256 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.256 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:14.514 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.514 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:14.514 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.514 13:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:14.772 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.772 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:14.772 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.772 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:15.030 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.030 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:15.030 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.030 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:15.288 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.288 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:15.288 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.288 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:15.546 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.546 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:15.546 13:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:15.803 13:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:16.061 13:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:16.995 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:16.995 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:16.995 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.995 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:17.259 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.259 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:17.259 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.259 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:17.521 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.521 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:17.521 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.521 13:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:17.779 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.779 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:17.779 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.779 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:18.037 13:39:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.037 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:18.037 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.037 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:18.295 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.295 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:18.295 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.295 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:18.553 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.553 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:18.553 13:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:18.811 13:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:19.070 13:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:20.004 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:20.004 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:20.004 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.004 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:20.262 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.262 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:20.262 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.262 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:20.520 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:20.520 13:39:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:20.520 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.520 13:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:20.778 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.778 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:20.778 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.778 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:21.034 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.034 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:21.034 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.034 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:21.291 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.291 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:21.291 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.291 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3699010 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3699010 ']' 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3699010 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3699010 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
3699010' 00:31:21.548 killing process with pid 3699010 00:31:21.548 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3699010 00:31:21.549 13:39:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3699010 00:31:21.549 Connection closed with partial response: 00:31:21.549 00:31:21.549 00:31:21.810 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3699010 00:31:21.810 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:21.810 [2024-07-12 13:38:44.768873] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:31:21.810 [2024-07-12 13:38:44.768958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699010 ] 00:31:21.810 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.810 [2024-07-12 13:38:44.801511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:21.810 [2024-07-12 13:38:44.829020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.810 [2024-07-12 13:38:44.913928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.810 Running I/O for 90 seconds... 00:31:21.810 [2024-07-12 13:39:00.563754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.563812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.563890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.563911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.563935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.563952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.563975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.563991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
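
The WRITE completions in this stretch of the bdevperf log all carry the NVMe path status ASYMMETRIC ACCESS INACCESSIBLE (03/02), which is what the host sees while a listener sits in the "inaccessible" ANA state; the test toggles those states throughout the run with its set_ANA_state helper, visible in the @59/@60 trace lines above. A minimal sketch, assuming bash, of what that traced helper appears to do (the authoritative version lives in test/nvmf/host/multipath_status.sh; the rpc variable below is introduced here only for brevity):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Set the ANA state of both listeners on the target, one state per port,
  # as the @59/@60 trace lines show. Note there is no -s socket option here:
  # these RPCs go to the nvmf target, not to the bdevperf RPC socket.
  set_ANA_state() {   # usage: set_ANA_state <state-for-4420> <state-for-4421>
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  # e.g. set_ANA_state inaccessible optimized, followed by a short sleep
  # before the path states are re-checked.
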
00:31:21.810 [2024-07-12 13:39:00.564066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.564970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.564993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.810 [2024-07-12 13:39:00.565562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:21.810 [2024-07-12 13:39:00.565585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.565623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.565662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:21.811 
[2024-07-12 13:39:00.565706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.565746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.565799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.565836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.565891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.565931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.565970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.565987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566652] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.566973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.566990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567134] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.811 [2024-07-12 13:39:00.567503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:21.811 [2024-07-12 13:39:00.567530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.567962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.567982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.812 [2024-07-12 13:39:00.568562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.812 [2024-07-12 13:39:00.568620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
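
Between ANA transitions, the script verifies what the host-side bdev_nvme layer reports for each path by querying the bdevperf RPC socket and filtering on the listener's trsvcid. A minimal sketch, assuming bash and the exact jq filter shown in the trace, of what the traced port_status checks amount to (the real helper is in host/multipath_status.sh; the function below is a reconstruction, not the script's verbatim code):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Read one field (current / connected / accessible) of the io_path whose
  # transport service id matches the given port, then compare it with the
  # expected value, as in e.g. "port_status 4420 accessible false".
  port_status() {   # usage: port_status <trsvcid> <field> <expected>
      local got
      got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }
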
00:31:21.812 [2024-07-12 13:39:00.568891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.568933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.568949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.569229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.569282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.569345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.569394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.569442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.569489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.569536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.812 [2024-07-12 13:39:00.569583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:21.812 [2024-07-12 13:39:00.569629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:00.569646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.569676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:00.569692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.569721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:00.569738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.569767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:00.569784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.569813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:00.569830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.569859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:00.569875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.569904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:00.569925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.569956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:00.569972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.570001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:00.570018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.570047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:00.570064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.570092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:00.570109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.570138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:00.570154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:00.570184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:00.570200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.293860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:16.293918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.294001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:16.294022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.294899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:16.294924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.294953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:16.294971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.294994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:16.295011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:21.813 [2024-07-12 13:39:16.295105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.813 [2024-07-12 13:39:16.295182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.295979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.295995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.296017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.296033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.296056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.296072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.296094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.296111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.296133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.296148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:21.813 [2024-07-12 13:39:16.296170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.813 [2024-07-12 13:39:16.296187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 
dnr:0 00:31:21.814 [2024-07-12 13:39:16.296529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.814 [2024-07-12 13:39:16.296818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.296957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.296974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.297122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.814 [2024-07-12 13:39:16.297145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.297169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.814 [2024-07-12 13:39:16.297186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.297209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.814 [2024-07-12 13:39:16.297225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.297248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.297264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.297287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.297303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.297333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.297352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.297385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.814 [2024-07-12 13:39:16.297406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:21.814 [2024-07-12 13:39:16.297429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:21.814 [2024-07-12 13:39:16.297446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:31:21.814 [2024-07-12 13:39:16.297468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:21.814 [2024-07-12 13:39:16.297484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:31:21.814 [2024-07-12 13:39:16.297506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:21.814 [2024-07-12 13:39:16.297522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:31:21.814 Received shutdown signal, test time was about 32.566473 seconds
00:31:21.814
00:31:21.814                                                          Latency(us)
00:31:21.814 Device Information                     : runtime(s)     IOPS     MiB/s    Fail/s     TO/s    Average       min        max
00:31:21.814 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:21.814 Verification LBA range: start 0x0 length 0x4000
00:31:21.814 Nvme0n1                                :      32.57  7933.80     30.99      0.00     0.00   16107.30    254.86 4026531.84
00:31:21.814 ===================================================================================================================
00:31:21.814 Total                                  :            7933.80     30.99      0.00     0.00   16107.30    254.86 4026531.84
00:31:21.814 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:22.073 rmmod nvme_tcp
00:31:22.073 rmmod nvme_fabrics
00:31:22.073 rmmod nvme_keyring
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3698729 ']'
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3698729
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3698729 ']'
00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3698729
00:31:22.073 13:39:19
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3698729 00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3698729' 00:31:22.073 killing process with pid 3698729 00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3698729 00:31:22.073 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3698729 00:31:22.331 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:22.331 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.331 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.331 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.331 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.331 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.331 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.331 13:39:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.885 13:39:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:24.885 00:31:24.885 real 0m41.268s 00:31:24.885 user 2m4.568s 00:31:24.885 sys 0m10.395s 00:31:24.885 13:39:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:24.885 13:39:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:24.885 ************************************ 00:31:24.885 END TEST nvmf_host_multipath_status 00:31:24.885 ************************************ 00:31:24.885 13:39:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:24.885 13:39:21 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:24.885 13:39:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:24.885 13:39:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:24.885 13:39:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.885 ************************************ 00:31:24.885 START TEST nvmf_discovery_remove_ifc 00:31:24.885 ************************************ 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:24.885 * Looking for test storage... 
00:31:24.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:24.885 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:24.886 13:39:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:26.794 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:26.794 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.794 13:39:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:26.794 Found net devices under 0000:09:00.0: cvl_0_0 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.794 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:26.795 Found net devices under 0000:09:00.1: cvl_0_1 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.795 13:39:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:26.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:31:26.795 00:31:26.795 --- 10.0.0.2 ping statistics --- 00:31:26.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.795 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:26.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:31:26.795 00:31:26.795 --- 10.0.0.1 ping statistics --- 00:31:26.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.795 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3705814 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3705814 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3705814 ']' 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:26.795 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.795 [2024-07-12 13:39:24.134764] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:31:26.795 [2024-07-12 13:39:24.134844] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.795 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.795 [2024-07-12 13:39:24.171912] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:31:26.795 [2024-07-12 13:39:24.197889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.053 [2024-07-12 13:39:24.281346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.053 [2024-07-12 13:39:24.281390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.053 [2024-07-12 13:39:24.281418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.053 [2024-07-12 13:39:24.281431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.054 [2024-07-12 13:39:24.281441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.054 [2024-07-12 13:39:24.281467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.054 [2024-07-12 13:39:24.424910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.054 [2024-07-12 13:39:24.433074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:27.054 null0 00:31:27.054 [2024-07-12 13:39:24.465038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3705844 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3705844 /tmp/host.sock 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3705844 ']' 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
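At this point nvmf/common.sh has split the two cvl ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, and the target application is then started inside the namespace. A minimal sketch of that setup, paraphrased from the commands logged above (the interface names, addresses and the relative nvmf_tgt path are this test's own conventions, not general requirements):

# Sketch of the namespace setup performed by nvmf/common.sh in the trace above.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target side lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The NVMe-oF target then runs inside the namespace (the log uses the full
# workspace path to build/bin/nvmf_tgt).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &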
00:31:27.054 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:27.054 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.312 [2024-07-12 13:39:24.531912] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:31:27.312 [2024-07-12 13:39:24.531983] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705844 ] 00:31:27.312 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.312 [2024-07-12 13:39:24.566933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:27.312 [2024-07-12 13:39:24.595639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.312 [2024-07-12 13:39:24.682200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.312 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.570 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.570 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:27.570 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.570 13:39:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:28.502 [2024-07-12 13:39:25.913154] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:28.502 [2024-07-12 13:39:25.913183] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:28.502 [2024-07-12 13:39:25.913204] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:28.761 [2024-07-12 13:39:26.041646] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:28.761 [2024-07-12 13:39:26.104132] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:28.761 [2024-07-12 13:39:26.104190] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:28.761 [2024-07-12 13:39:26.104228] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:28.761 [2024-07-12 13:39:26.104252] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:28.761 [2024-07-12 13:39:26.104288] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:28.761 [2024-07-12 13:39:26.111193] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdb6370 was disconnected and freed. delete nvme_qpair. 
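The host side of the test is the second nvmf_tgt instance serving RPCs on /tmp/host.sock. It attaches to the target's discovery service and then polls the bdev list until nvme0n1 appears, which is the attach sequence logged above. A paraphrase of that flow, reconstructed from the rpc_cmd/jq calls in the trace (the rpc.py path is illustrative; the test's rpc_cmd helper forwards to it):

# Start discovery against the target's discovery service (flags as logged above).
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

# get_bdev_list / wait_for_bdev as seen in the xtrace: list bdev names,
# then poll once per second until the list matches the expected value.
get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1    # discovery created bdev nvme0n1 from subsystem cnode0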
00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:28.761 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.019 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:29.019 13:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:29.951 13:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:30.883 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:30.884 13:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:32.250 13:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:33.180 13:39:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
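The repeated get_bdev_list calls above are the test waiting out the injected fault: a few trace lines earlier (discovery_remove_ifc.sh@75/@76) the data-path address was deleted and the interface brought down inside the target namespace, and the host is now polled until the bdev list drains. A sketch of that step using the commands from the trace and the wait helper from the previous sketch; with --ctrlr-loss-timeout-sec 2 the nvme0n1 bdev should be deleted within a few seconds of the path going away:

# Fault injection: drop the target-side data path inside the namespace.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

# Wait for the host to time the controller out and remove the bdev
# (wait_for_bdev '' in the trace: poll until the bdev list is empty).
wait_for_bdev ""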
00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:34.114 13:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:34.114 [2024-07-12 13:39:31.545283] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:34.114 [2024-07-12 13:39:31.545376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:34.114 [2024-07-12 13:39:31.545399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:34.114 [2024-07-12 13:39:31.545416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:34.114 [2024-07-12 13:39:31.545429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:34.114 [2024-07-12 13:39:31.545443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:34.114 [2024-07-12 13:39:31.545456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:34.114 [2024-07-12 13:39:31.545470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:34.114 [2024-07-12 13:39:31.545489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:34.114 [2024-07-12 13:39:31.545503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:34.114 [2024-07-12 13:39:31.545516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:34.114 [2024-07-12 13:39:31.545528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7cd50 is same with the state(5) to be set 00:31:34.114 [2024-07-12 13:39:31.555320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7cd50 (9): Bad file descriptor 00:31:34.114 [2024-07-12 13:39:31.565346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.047 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:35.047 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.047 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:35.047 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.047 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:35.047 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:35.047 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:35.305 [2024-07-12 13:39:32.571418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:35.305 [2024-07-12 
13:39:32.571489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7cd50 with addr=10.0.0.2, port=4420 00:31:35.305 [2024-07-12 13:39:32.571514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7cd50 is same with the state(5) to be set 00:31:35.305 [2024-07-12 13:39:32.571558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7cd50 (9): Bad file descriptor 00:31:35.305 [2024-07-12 13:39:32.572021] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:35.305 [2024-07-12 13:39:32.572052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.305 [2024-07-12 13:39:32.572067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:35.305 [2024-07-12 13:39:32.572082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.305 [2024-07-12 13:39:32.572110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:35.305 [2024-07-12 13:39:32.572127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:35.305 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.305 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:35.305 13:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:36.237 [2024-07-12 13:39:33.574653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:36.237 [2024-07-12 13:39:33.574725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:36.237 [2024-07-12 13:39:33.574755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:36.237 [2024-07-12 13:39:33.574769] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:36.237 [2024-07-12 13:39:33.574802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
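While the path is down, the reconnect attempts keep failing (connect() errno 110) until the controller-loss timeout expires. If one wanted to watch this from the RPC side rather than from the bdev_nvme debug log, the host app can also be asked for its current NVMe controllers; a possible spot check (the RPC is SPDK's bdev_nvme_get_controllers, the jq pretty-print is just for readability):

# Optional: inspect the host app's view of its controllers while the path is down.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .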
00:31:36.237 [2024-07-12 13:39:33.574850] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:36.237 [2024-07-12 13:39:33.574910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.237 [2024-07-12 13:39:33.574938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.237 [2024-07-12 13:39:33.574958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.237 [2024-07-12 13:39:33.574971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.237 [2024-07-12 13:39:33.574985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.237 [2024-07-12 13:39:33.574998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.237 [2024-07-12 13:39:33.575011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.237 [2024-07-12 13:39:33.575024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.237 [2024-07-12 13:39:33.575037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.237 [2024-07-12 13:39:33.575050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.237 [2024-07-12 13:39:33.575063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:36.237 [2024-07-12 13:39:33.575118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7c210 (9): Bad file descriptor 00:31:36.237 [2024-07-12 13:39:33.576102] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:36.237 [2024-07-12 13:39:33.576123] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.237 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.494 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:36.494 13:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:37.427 13:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:38.361 [2024-07-12 13:39:35.588213] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:38.361 [2024-07-12 13:39:35.588251] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:38.361 [2024-07-12 13:39:35.588273] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:38.361 [2024-07-12 13:39:35.715728] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.361 [2024-07-12 13:39:35.778539] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:38.361 [2024-07-12 13:39:35.778587] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:38.361 [2024-07-12 13:39:35.778634] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:38.361 [2024-07-12 13:39:35.778672] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:38.361 [2024-07-12 13:39:35.778685] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:38.361 [2024-07-12 13:39:35.786597] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd8a550 was disconnected and freed. delete nvme_qpair. 
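With the old controller torn down, the test restores the interface and relies on the still-running discovery service to re-attach on its own, which is what the "attach nvme1 done" / "found again" messages above show. A sketch of the restore step, mirroring the logged commands and reusing the wait helper from earlier:

# Restore the target-side data path inside the namespace.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

# The persistent discovery connection re-attaches the subsystem as a new
# controller (nvme1); wait for its bdev to appear.
wait_for_bdev nvme1n1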
00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:38.361 13:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3705844 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3705844 ']' 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3705844 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3705844 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3705844' 00:31:39.734 killing process with pid 3705844 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3705844 00:31:39.734 13:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3705844 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:39.734 rmmod nvme_tcp 00:31:39.734 rmmod nvme_fabrics 00:31:39.734 rmmod nvme_keyring 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3705814 ']' 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3705814 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3705814 ']' 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3705814 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3705814 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3705814' 00:31:39.734 killing process with pid 3705814 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3705814 00:31:39.734 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3705814 00:31:39.992 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:39.992 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:39.992 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:39.992 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:39.992 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:39.992 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.992 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:39.992 13:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.558 13:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:42.558 00:31:42.558 real 0m17.644s 00:31:42.558 user 0m25.467s 00:31:42.558 sys 0m3.096s 00:31:42.558 13:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:42.558 13:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:42.558 ************************************ 00:31:42.558 END TEST nvmf_discovery_remove_ifc 00:31:42.558 ************************************ 00:31:42.558 13:39:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:42.558 13:39:39 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:42.558 13:39:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:42.558 13:39:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:31:42.558 13:39:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:42.558 ************************************ 00:31:42.558 START TEST nvmf_identify_kernel_target 00:31:42.558 ************************************ 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:42.558 * Looking for test storage... 00:31:42.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:42.558 13:39:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:42.558 13:39:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:44.460 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:44.460 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:44.460 Found net devices under 0000:09:00.0: cvl_0_0 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:44.460 Found net devices under 0000:09:00.1: cvl_0_1 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:44.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:31:44.460 00:31:44.460 --- 10.0.0.2 ping statistics --- 00:31:44.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.460 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:31:44.460 00:31:44.460 --- 10.0.0.1 ping statistics --- 00:31:44.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.460 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:44.460 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:44.461 13:39:41 
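The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) pins down the test topology: the first detected interface (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule admits TCP/4420, and both directions are ping-verified. A minimal standalone sketch of the same pattern, assuming generic veth_tgt/veth_ini interface names in place of the cvl_* devices used in this run:

  # create an isolated namespace for the target side of the NVMe/TCP link
  ip netns add spdk_tgt_ns
  ip link set veth_tgt netns spdk_tgt_ns                        # hypothetical target-side interface
  ip addr add 10.0.0.1/24 dev veth_ini                          # hypothetical initiator-side interface
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_ini up
  ip netns exec spdk_tgt_ns ip link set veth_tgt up
  ip netns exec spdk_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2 && ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1   # connectivity check, as in the trace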
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:44.461 13:39:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:45.392 Waiting for block devices as requested 00:31:45.651 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:45.651 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:45.651 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:45.907 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:45.907 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:45.907 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:45.907 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:46.164 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:46.164 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:31:46.421 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:46.421 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:46.421 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:46.421 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:46.678 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:46.678 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:46.678 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:46.678 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:46.936 No valid GPT data, bailing 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:31:46.936 00:31:46.936 Discovery Log Number of Records 2, Generation counter 2 00:31:46.936 =====Discovery Log Entry 0====== 00:31:46.936 trtype: tcp 00:31:46.936 adrfam: ipv4 00:31:46.936 subtype: current discovery subsystem 00:31:46.936 treq: not specified, sq flow control disable supported 00:31:46.936 portid: 1 00:31:46.936 trsvcid: 4420 00:31:46.936 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:46.936 traddr: 10.0.0.1 00:31:46.936 eflags: none 00:31:46.936 sectype: none 00:31:46.936 =====Discovery Log Entry 1====== 00:31:46.936 trtype: tcp 00:31:46.936 adrfam: ipv4 00:31:46.936 subtype: nvme subsystem 00:31:46.936 treq: not specified, sq flow control disable supported 00:31:46.936 portid: 1 00:31:46.936 trsvcid: 4420 00:31:46.936 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:46.936 traddr: 10.0.0.1 00:31:46.936 eflags: none 00:31:46.936 sectype: none 00:31:46.936 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:46.936 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:47.197 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.197 ===================================================== 00:31:47.197 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:47.197 ===================================================== 00:31:47.197 Controller Capabilities/Features 00:31:47.197 ================================ 00:31:47.197 Vendor ID: 0000 00:31:47.197 Subsystem Vendor ID: 0000 00:31:47.197 Serial Number: 680a5f2903b03299a3d9 00:31:47.197 Model Number: Linux 00:31:47.197 Firmware Version: 6.7.0-68 00:31:47.197 Recommended Arb Burst: 0 00:31:47.197 IEEE OUI Identifier: 00 00 00 00:31:47.197 Multi-path I/O 00:31:47.197 May have multiple subsystem ports: No 00:31:47.197 May have multiple 
controllers: No 00:31:47.197 Associated with SR-IOV VF: No 00:31:47.197 Max Data Transfer Size: Unlimited 00:31:47.197 Max Number of Namespaces: 0 00:31:47.197 Max Number of I/O Queues: 1024 00:31:47.197 NVMe Specification Version (VS): 1.3 00:31:47.197 NVMe Specification Version (Identify): 1.3 00:31:47.197 Maximum Queue Entries: 1024 00:31:47.197 Contiguous Queues Required: No 00:31:47.197 Arbitration Mechanisms Supported 00:31:47.197 Weighted Round Robin: Not Supported 00:31:47.197 Vendor Specific: Not Supported 00:31:47.197 Reset Timeout: 7500 ms 00:31:47.197 Doorbell Stride: 4 bytes 00:31:47.197 NVM Subsystem Reset: Not Supported 00:31:47.197 Command Sets Supported 00:31:47.197 NVM Command Set: Supported 00:31:47.197 Boot Partition: Not Supported 00:31:47.197 Memory Page Size Minimum: 4096 bytes 00:31:47.197 Memory Page Size Maximum: 4096 bytes 00:31:47.197 Persistent Memory Region: Not Supported 00:31:47.197 Optional Asynchronous Events Supported 00:31:47.197 Namespace Attribute Notices: Not Supported 00:31:47.197 Firmware Activation Notices: Not Supported 00:31:47.197 ANA Change Notices: Not Supported 00:31:47.197 PLE Aggregate Log Change Notices: Not Supported 00:31:47.197 LBA Status Info Alert Notices: Not Supported 00:31:47.197 EGE Aggregate Log Change Notices: Not Supported 00:31:47.197 Normal NVM Subsystem Shutdown event: Not Supported 00:31:47.197 Zone Descriptor Change Notices: Not Supported 00:31:47.197 Discovery Log Change Notices: Supported 00:31:47.197 Controller Attributes 00:31:47.197 128-bit Host Identifier: Not Supported 00:31:47.197 Non-Operational Permissive Mode: Not Supported 00:31:47.197 NVM Sets: Not Supported 00:31:47.197 Read Recovery Levels: Not Supported 00:31:47.197 Endurance Groups: Not Supported 00:31:47.197 Predictable Latency Mode: Not Supported 00:31:47.197 Traffic Based Keep ALive: Not Supported 00:31:47.197 Namespace Granularity: Not Supported 00:31:47.197 SQ Associations: Not Supported 00:31:47.197 UUID List: Not Supported 00:31:47.197 Multi-Domain Subsystem: Not Supported 00:31:47.197 Fixed Capacity Management: Not Supported 00:31:47.197 Variable Capacity Management: Not Supported 00:31:47.197 Delete Endurance Group: Not Supported 00:31:47.197 Delete NVM Set: Not Supported 00:31:47.197 Extended LBA Formats Supported: Not Supported 00:31:47.197 Flexible Data Placement Supported: Not Supported 00:31:47.197 00:31:47.197 Controller Memory Buffer Support 00:31:47.197 ================================ 00:31:47.197 Supported: No 00:31:47.197 00:31:47.197 Persistent Memory Region Support 00:31:47.197 ================================ 00:31:47.197 Supported: No 00:31:47.197 00:31:47.197 Admin Command Set Attributes 00:31:47.197 ============================ 00:31:47.197 Security Send/Receive: Not Supported 00:31:47.197 Format NVM: Not Supported 00:31:47.197 Firmware Activate/Download: Not Supported 00:31:47.197 Namespace Management: Not Supported 00:31:47.197 Device Self-Test: Not Supported 00:31:47.197 Directives: Not Supported 00:31:47.197 NVMe-MI: Not Supported 00:31:47.197 Virtualization Management: Not Supported 00:31:47.197 Doorbell Buffer Config: Not Supported 00:31:47.197 Get LBA Status Capability: Not Supported 00:31:47.197 Command & Feature Lockdown Capability: Not Supported 00:31:47.197 Abort Command Limit: 1 00:31:47.197 Async Event Request Limit: 1 00:31:47.197 Number of Firmware Slots: N/A 00:31:47.197 Firmware Slot 1 Read-Only: N/A 00:31:47.197 Firmware Activation Without Reset: N/A 00:31:47.197 Multiple Update Detection Support: N/A 
00:31:47.197 Firmware Update Granularity: No Information Provided 00:31:47.197 Per-Namespace SMART Log: No 00:31:47.197 Asymmetric Namespace Access Log Page: Not Supported 00:31:47.197 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:47.197 Command Effects Log Page: Not Supported 00:31:47.197 Get Log Page Extended Data: Supported 00:31:47.197 Telemetry Log Pages: Not Supported 00:31:47.197 Persistent Event Log Pages: Not Supported 00:31:47.197 Supported Log Pages Log Page: May Support 00:31:47.197 Commands Supported & Effects Log Page: Not Supported 00:31:47.197 Feature Identifiers & Effects Log Page:May Support 00:31:47.197 NVMe-MI Commands & Effects Log Page: May Support 00:31:47.197 Data Area 4 for Telemetry Log: Not Supported 00:31:47.197 Error Log Page Entries Supported: 1 00:31:47.198 Keep Alive: Not Supported 00:31:47.198 00:31:47.198 NVM Command Set Attributes 00:31:47.198 ========================== 00:31:47.198 Submission Queue Entry Size 00:31:47.198 Max: 1 00:31:47.198 Min: 1 00:31:47.198 Completion Queue Entry Size 00:31:47.198 Max: 1 00:31:47.198 Min: 1 00:31:47.198 Number of Namespaces: 0 00:31:47.198 Compare Command: Not Supported 00:31:47.198 Write Uncorrectable Command: Not Supported 00:31:47.198 Dataset Management Command: Not Supported 00:31:47.198 Write Zeroes Command: Not Supported 00:31:47.198 Set Features Save Field: Not Supported 00:31:47.198 Reservations: Not Supported 00:31:47.198 Timestamp: Not Supported 00:31:47.198 Copy: Not Supported 00:31:47.198 Volatile Write Cache: Not Present 00:31:47.198 Atomic Write Unit (Normal): 1 00:31:47.198 Atomic Write Unit (PFail): 1 00:31:47.198 Atomic Compare & Write Unit: 1 00:31:47.198 Fused Compare & Write: Not Supported 00:31:47.198 Scatter-Gather List 00:31:47.198 SGL Command Set: Supported 00:31:47.198 SGL Keyed: Not Supported 00:31:47.198 SGL Bit Bucket Descriptor: Not Supported 00:31:47.198 SGL Metadata Pointer: Not Supported 00:31:47.198 Oversized SGL: Not Supported 00:31:47.198 SGL Metadata Address: Not Supported 00:31:47.198 SGL Offset: Supported 00:31:47.198 Transport SGL Data Block: Not Supported 00:31:47.198 Replay Protected Memory Block: Not Supported 00:31:47.198 00:31:47.198 Firmware Slot Information 00:31:47.198 ========================= 00:31:47.198 Active slot: 0 00:31:47.198 00:31:47.198 00:31:47.198 Error Log 00:31:47.198 ========= 00:31:47.198 00:31:47.198 Active Namespaces 00:31:47.198 ================= 00:31:47.198 Discovery Log Page 00:31:47.198 ================== 00:31:47.198 Generation Counter: 2 00:31:47.198 Number of Records: 2 00:31:47.198 Record Format: 0 00:31:47.198 00:31:47.198 Discovery Log Entry 0 00:31:47.198 ---------------------- 00:31:47.198 Transport Type: 3 (TCP) 00:31:47.198 Address Family: 1 (IPv4) 00:31:47.198 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:47.198 Entry Flags: 00:31:47.198 Duplicate Returned Information: 0 00:31:47.198 Explicit Persistent Connection Support for Discovery: 0 00:31:47.198 Transport Requirements: 00:31:47.198 Secure Channel: Not Specified 00:31:47.198 Port ID: 1 (0x0001) 00:31:47.198 Controller ID: 65535 (0xffff) 00:31:47.198 Admin Max SQ Size: 32 00:31:47.198 Transport Service Identifier: 4420 00:31:47.198 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:47.198 Transport Address: 10.0.0.1 00:31:47.198 Discovery Log Entry 1 00:31:47.198 ---------------------- 00:31:47.198 Transport Type: 3 (TCP) 00:31:47.198 Address Family: 1 (IPv4) 00:31:47.198 Subsystem Type: 2 (NVM Subsystem) 00:31:47.198 Entry Flags: 
00:31:47.198 Duplicate Returned Information: 0 00:31:47.198 Explicit Persistent Connection Support for Discovery: 0 00:31:47.198 Transport Requirements: 00:31:47.198 Secure Channel: Not Specified 00:31:47.198 Port ID: 1 (0x0001) 00:31:47.198 Controller ID: 65535 (0xffff) 00:31:47.198 Admin Max SQ Size: 32 00:31:47.198 Transport Service Identifier: 4420 00:31:47.198 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:47.198 Transport Address: 10.0.0.1 00:31:47.198 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:47.198 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.198 get_feature(0x01) failed 00:31:47.198 get_feature(0x02) failed 00:31:47.198 get_feature(0x04) failed 00:31:47.198 ===================================================== 00:31:47.198 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:47.198 ===================================================== 00:31:47.198 Controller Capabilities/Features 00:31:47.198 ================================ 00:31:47.198 Vendor ID: 0000 00:31:47.198 Subsystem Vendor ID: 0000 00:31:47.198 Serial Number: c509f3e6bdaa564770f2 00:31:47.198 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:47.198 Firmware Version: 6.7.0-68 00:31:47.198 Recommended Arb Burst: 6 00:31:47.198 IEEE OUI Identifier: 00 00 00 00:31:47.198 Multi-path I/O 00:31:47.198 May have multiple subsystem ports: Yes 00:31:47.198 May have multiple controllers: Yes 00:31:47.198 Associated with SR-IOV VF: No 00:31:47.198 Max Data Transfer Size: Unlimited 00:31:47.198 Max Number of Namespaces: 1024 00:31:47.198 Max Number of I/O Queues: 128 00:31:47.198 NVMe Specification Version (VS): 1.3 00:31:47.198 NVMe Specification Version (Identify): 1.3 00:31:47.198 Maximum Queue Entries: 1024 00:31:47.198 Contiguous Queues Required: No 00:31:47.198 Arbitration Mechanisms Supported 00:31:47.198 Weighted Round Robin: Not Supported 00:31:47.198 Vendor Specific: Not Supported 00:31:47.198 Reset Timeout: 7500 ms 00:31:47.198 Doorbell Stride: 4 bytes 00:31:47.198 NVM Subsystem Reset: Not Supported 00:31:47.198 Command Sets Supported 00:31:47.198 NVM Command Set: Supported 00:31:47.198 Boot Partition: Not Supported 00:31:47.198 Memory Page Size Minimum: 4096 bytes 00:31:47.198 Memory Page Size Maximum: 4096 bytes 00:31:47.198 Persistent Memory Region: Not Supported 00:31:47.198 Optional Asynchronous Events Supported 00:31:47.198 Namespace Attribute Notices: Supported 00:31:47.198 Firmware Activation Notices: Not Supported 00:31:47.198 ANA Change Notices: Supported 00:31:47.198 PLE Aggregate Log Change Notices: Not Supported 00:31:47.198 LBA Status Info Alert Notices: Not Supported 00:31:47.198 EGE Aggregate Log Change Notices: Not Supported 00:31:47.198 Normal NVM Subsystem Shutdown event: Not Supported 00:31:47.198 Zone Descriptor Change Notices: Not Supported 00:31:47.198 Discovery Log Change Notices: Not Supported 00:31:47.198 Controller Attributes 00:31:47.198 128-bit Host Identifier: Supported 00:31:47.198 Non-Operational Permissive Mode: Not Supported 00:31:47.198 NVM Sets: Not Supported 00:31:47.198 Read Recovery Levels: Not Supported 00:31:47.198 Endurance Groups: Not Supported 00:31:47.198 Predictable Latency Mode: Not Supported 00:31:47.198 Traffic Based Keep ALive: Supported 00:31:47.198 Namespace Granularity: Not Supported 
00:31:47.198 SQ Associations: Not Supported 00:31:47.198 UUID List: Not Supported 00:31:47.198 Multi-Domain Subsystem: Not Supported 00:31:47.198 Fixed Capacity Management: Not Supported 00:31:47.198 Variable Capacity Management: Not Supported 00:31:47.198 Delete Endurance Group: Not Supported 00:31:47.198 Delete NVM Set: Not Supported 00:31:47.198 Extended LBA Formats Supported: Not Supported 00:31:47.198 Flexible Data Placement Supported: Not Supported 00:31:47.198 00:31:47.198 Controller Memory Buffer Support 00:31:47.198 ================================ 00:31:47.198 Supported: No 00:31:47.198 00:31:47.198 Persistent Memory Region Support 00:31:47.198 ================================ 00:31:47.198 Supported: No 00:31:47.198 00:31:47.198 Admin Command Set Attributes 00:31:47.198 ============================ 00:31:47.198 Security Send/Receive: Not Supported 00:31:47.198 Format NVM: Not Supported 00:31:47.198 Firmware Activate/Download: Not Supported 00:31:47.198 Namespace Management: Not Supported 00:31:47.198 Device Self-Test: Not Supported 00:31:47.198 Directives: Not Supported 00:31:47.198 NVMe-MI: Not Supported 00:31:47.198 Virtualization Management: Not Supported 00:31:47.198 Doorbell Buffer Config: Not Supported 00:31:47.198 Get LBA Status Capability: Not Supported 00:31:47.198 Command & Feature Lockdown Capability: Not Supported 00:31:47.198 Abort Command Limit: 4 00:31:47.198 Async Event Request Limit: 4 00:31:47.198 Number of Firmware Slots: N/A 00:31:47.198 Firmware Slot 1 Read-Only: N/A 00:31:47.198 Firmware Activation Without Reset: N/A 00:31:47.198 Multiple Update Detection Support: N/A 00:31:47.198 Firmware Update Granularity: No Information Provided 00:31:47.198 Per-Namespace SMART Log: Yes 00:31:47.198 Asymmetric Namespace Access Log Page: Supported 00:31:47.198 ANA Transition Time : 10 sec 00:31:47.198 00:31:47.198 Asymmetric Namespace Access Capabilities 00:31:47.198 ANA Optimized State : Supported 00:31:47.198 ANA Non-Optimized State : Supported 00:31:47.198 ANA Inaccessible State : Supported 00:31:47.198 ANA Persistent Loss State : Supported 00:31:47.198 ANA Change State : Supported 00:31:47.198 ANAGRPID is not changed : No 00:31:47.198 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:47.198 00:31:47.198 ANA Group Identifier Maximum : 128 00:31:47.198 Number of ANA Group Identifiers : 128 00:31:47.198 Max Number of Allowed Namespaces : 1024 00:31:47.198 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:47.198 Command Effects Log Page: Supported 00:31:47.198 Get Log Page Extended Data: Supported 00:31:47.198 Telemetry Log Pages: Not Supported 00:31:47.198 Persistent Event Log Pages: Not Supported 00:31:47.199 Supported Log Pages Log Page: May Support 00:31:47.199 Commands Supported & Effects Log Page: Not Supported 00:31:47.199 Feature Identifiers & Effects Log Page:May Support 00:31:47.199 NVMe-MI Commands & Effects Log Page: May Support 00:31:47.199 Data Area 4 for Telemetry Log: Not Supported 00:31:47.199 Error Log Page Entries Supported: 128 00:31:47.199 Keep Alive: Supported 00:31:47.199 Keep Alive Granularity: 1000 ms 00:31:47.199 00:31:47.199 NVM Command Set Attributes 00:31:47.199 ========================== 00:31:47.199 Submission Queue Entry Size 00:31:47.199 Max: 64 00:31:47.199 Min: 64 00:31:47.199 Completion Queue Entry Size 00:31:47.199 Max: 16 00:31:47.199 Min: 16 00:31:47.199 Number of Namespaces: 1024 00:31:47.199 Compare Command: Not Supported 00:31:47.199 Write Uncorrectable Command: Not Supported 00:31:47.199 Dataset Management Command: Supported 
00:31:47.199 Write Zeroes Command: Supported 00:31:47.199 Set Features Save Field: Not Supported 00:31:47.199 Reservations: Not Supported 00:31:47.199 Timestamp: Not Supported 00:31:47.199 Copy: Not Supported 00:31:47.199 Volatile Write Cache: Present 00:31:47.199 Atomic Write Unit (Normal): 1 00:31:47.199 Atomic Write Unit (PFail): 1 00:31:47.199 Atomic Compare & Write Unit: 1 00:31:47.199 Fused Compare & Write: Not Supported 00:31:47.199 Scatter-Gather List 00:31:47.199 SGL Command Set: Supported 00:31:47.199 SGL Keyed: Not Supported 00:31:47.199 SGL Bit Bucket Descriptor: Not Supported 00:31:47.199 SGL Metadata Pointer: Not Supported 00:31:47.199 Oversized SGL: Not Supported 00:31:47.199 SGL Metadata Address: Not Supported 00:31:47.199 SGL Offset: Supported 00:31:47.199 Transport SGL Data Block: Not Supported 00:31:47.199 Replay Protected Memory Block: Not Supported 00:31:47.199 00:31:47.199 Firmware Slot Information 00:31:47.199 ========================= 00:31:47.199 Active slot: 0 00:31:47.199 00:31:47.199 Asymmetric Namespace Access 00:31:47.199 =========================== 00:31:47.199 Change Count : 0 00:31:47.199 Number of ANA Group Descriptors : 1 00:31:47.199 ANA Group Descriptor : 0 00:31:47.199 ANA Group ID : 1 00:31:47.199 Number of NSID Values : 1 00:31:47.199 Change Count : 0 00:31:47.199 ANA State : 1 00:31:47.199 Namespace Identifier : 1 00:31:47.199 00:31:47.199 Commands Supported and Effects 00:31:47.199 ============================== 00:31:47.199 Admin Commands 00:31:47.199 -------------- 00:31:47.199 Get Log Page (02h): Supported 00:31:47.199 Identify (06h): Supported 00:31:47.199 Abort (08h): Supported 00:31:47.199 Set Features (09h): Supported 00:31:47.199 Get Features (0Ah): Supported 00:31:47.199 Asynchronous Event Request (0Ch): Supported 00:31:47.199 Keep Alive (18h): Supported 00:31:47.199 I/O Commands 00:31:47.199 ------------ 00:31:47.199 Flush (00h): Supported 00:31:47.199 Write (01h): Supported LBA-Change 00:31:47.199 Read (02h): Supported 00:31:47.199 Write Zeroes (08h): Supported LBA-Change 00:31:47.199 Dataset Management (09h): Supported 00:31:47.199 00:31:47.199 Error Log 00:31:47.199 ========= 00:31:47.199 Entry: 0 00:31:47.199 Error Count: 0x3 00:31:47.199 Submission Queue Id: 0x0 00:31:47.199 Command Id: 0x5 00:31:47.199 Phase Bit: 0 00:31:47.199 Status Code: 0x2 00:31:47.199 Status Code Type: 0x0 00:31:47.199 Do Not Retry: 1 00:31:47.199 Error Location: 0x28 00:31:47.199 LBA: 0x0 00:31:47.199 Namespace: 0x0 00:31:47.199 Vendor Log Page: 0x0 00:31:47.199 ----------- 00:31:47.199 Entry: 1 00:31:47.199 Error Count: 0x2 00:31:47.199 Submission Queue Id: 0x0 00:31:47.199 Command Id: 0x5 00:31:47.199 Phase Bit: 0 00:31:47.199 Status Code: 0x2 00:31:47.199 Status Code Type: 0x0 00:31:47.199 Do Not Retry: 1 00:31:47.199 Error Location: 0x28 00:31:47.199 LBA: 0x0 00:31:47.199 Namespace: 0x0 00:31:47.199 Vendor Log Page: 0x0 00:31:47.199 ----------- 00:31:47.199 Entry: 2 00:31:47.199 Error Count: 0x1 00:31:47.199 Submission Queue Id: 0x0 00:31:47.199 Command Id: 0x4 00:31:47.199 Phase Bit: 0 00:31:47.199 Status Code: 0x2 00:31:47.199 Status Code Type: 0x0 00:31:47.199 Do Not Retry: 1 00:31:47.199 Error Location: 0x28 00:31:47.199 LBA: 0x0 00:31:47.199 Namespace: 0x0 00:31:47.199 Vendor Log Page: 0x0 00:31:47.199 00:31:47.199 Number of Queues 00:31:47.199 ================ 00:31:47.199 Number of I/O Submission Queues: 128 00:31:47.199 Number of I/O Completion Queues: 128 00:31:47.199 00:31:47.199 ZNS Specific Controller Data 00:31:47.199 
============================ 00:31:47.199 Zone Append Size Limit: 0 00:31:47.199 00:31:47.199 00:31:47.199 Active Namespaces 00:31:47.199 ================= 00:31:47.199 get_feature(0x05) failed 00:31:47.199 Namespace ID:1 00:31:47.199 Command Set Identifier: NVM (00h) 00:31:47.199 Deallocate: Supported 00:31:47.199 Deallocated/Unwritten Error: Not Supported 00:31:47.199 Deallocated Read Value: Unknown 00:31:47.199 Deallocate in Write Zeroes: Not Supported 00:31:47.199 Deallocated Guard Field: 0xFFFF 00:31:47.199 Flush: Supported 00:31:47.199 Reservation: Not Supported 00:31:47.199 Namespace Sharing Capabilities: Multiple Controllers 00:31:47.199 Size (in LBAs): 1953525168 (931GiB) 00:31:47.199 Capacity (in LBAs): 1953525168 (931GiB) 00:31:47.199 Utilization (in LBAs): 1953525168 (931GiB) 00:31:47.199 UUID: 8ad1362e-d503-4d2b-bc2e-becffea23954 00:31:47.199 Thin Provisioning: Not Supported 00:31:47.199 Per-NS Atomic Units: Yes 00:31:47.199 Atomic Boundary Size (Normal): 0 00:31:47.199 Atomic Boundary Size (PFail): 0 00:31:47.199 Atomic Boundary Offset: 0 00:31:47.199 NGUID/EUI64 Never Reused: No 00:31:47.199 ANA group ID: 1 00:31:47.199 Namespace Write Protected: No 00:31:47.199 Number of LBA Formats: 1 00:31:47.199 Current LBA Format: LBA Format #00 00:31:47.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:47.199 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:47.199 rmmod nvme_tcp 00:31:47.199 rmmod nvme_fabrics 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:47.199 13:39:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:49.737 
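The configure_kernel_target steps traced earlier (nvmf/common.sh@658-677) assemble a Linux kernel NVMe-oF target through configfs; xtrace shows the echoed values but hides their redirect targets. A sketch of that setup against the standard nvmet configfs attributes, where the attribute names chosen for each echo are an assumption rather than something the trace shows:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet                                  # traced at nvmf/common.sh@642
  modprobe nvmet-tcp                              # assumed prerequisite for a tcp port; not in this trace
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"   > "$sub/attr_model"          # assumed target of the echo at @665
  echo 1             > "$sub/attr_allow_any_host" # assumed target of the echo at @667
  echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"
  echo 1             > "$sub/namespaces/1/enable"
  echo 10.0.0.1      > "$port/addr_traddr"
  echo tcp           > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/$nqn"            # expose the subsystem on the port, as at @677

Once the symlink is in place, the in-kernel target answers discovery and I/O on 10.0.0.1:4420, which is what the nvme discover and spdk_nvme_identify runs above exercised.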
13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:49.737 13:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:50.673 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:50.673 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:50.673 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:50.673 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:50.673 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:50.673 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:50.673 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:50.673 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:50.673 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:50.673 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:50.673 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:50.673 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:50.673 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:50.673 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:50.673 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:50.673 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:51.612 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:31:51.871 00:31:51.871 real 0m9.665s 00:31:51.871 user 0m2.048s 00:31:51.871 sys 0m3.511s 00:31:51.871 13:39:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:51.871 13:39:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:51.871 ************************************ 00:31:51.871 END TEST nvmf_identify_kernel_target 00:31:51.871 ************************************ 00:31:51.871 13:39:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:51.871 13:39:49 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:51.871 13:39:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:51.871 13:39:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:51.871 13:39:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:51.871 ************************************ 00:31:51.871 START TEST nvmf_auth_host 00:31:51.871 ************************************ 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:51.871 * Looking for test storage... 00:31:51.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:51.871 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:51.872 13:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:54.422 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.423 
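The gather_supported_nvmf_pci_devs trace around this point first builds the whitelists of supported device IDs (e810/x722/mlx) and then resolves each PCI function to its kernel interfaces through sysfs (the pci_net_devs glob at nvmf/common.sh@383 and the operstate filtering behind the [[ up == up ]] checks). A minimal sketch of that lookup for a single BDF, with the script name and argument handling being illustrative only:

  #!/usr/bin/env bash
  # list the kernel net interfaces behind one PCI function, as nvmf/common.sh@383-401 does
  shopt -s nullglob
  pci=${1:?"usage: pass a PCI BDF such as 0000:09:00.0"}
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per netdev bound to the function
  (( ${#pci_net_devs[@]} > 0 )) || { echo "no net devices under $pci" >&2; exit 1; }
  for dev in "${pci_net_devs[@]}"; do
      state=$(<"$dev/operstate")                     # the traced [[ up == up ]] tests read this state
      echo "Found net device under $pci: ${dev##*/} ($state)"
  done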
13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:54.423 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:54.423 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:54.423 Found net devices under 0000:09:00.0: 
cvl_0_0 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:54.423 Found net devices under 0000:09:00.1: cvl_0_1 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:54.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:31:54.423 00:31:54.423 --- 10.0.0.2 ping statistics --- 00:31:54.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.423 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:54.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:31:54.423 00:31:54.423 --- 10.0.0.1 ping statistics --- 00:31:54.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.423 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3713023 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3713023 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3713023 ']' 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
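nvmfappstart (nvmf/common.sh@480-482, just above) launches nvmf_tgt inside the test namespace and then blocks in waitforlisten until the JSON-RPC socket answers. A rough equivalent from the top of an SPDK checkout, assuming the default /var/tmp/spdk.sock socket; the polling loop is a simplified stand-in for waitforlisten, not its actual implementation:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  tgt_pid=$!
  # poll the RPC socket until the target is ready (the unix socket lives on the shared filesystem,
  # so it is reachable from the root namespace even though the app runs in cvl_0_0_ns_spdk)
  for _ in $(seq 1 100); do
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          echo "nvmf_tgt ($tgt_pid) is up"
          break
      fi
      sleep 0.1
  done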
00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:54.423 13:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=51bcc8c6a7649cfe0cc459510429438d 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GNk 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 51bcc8c6a7649cfe0cc459510429438d 0 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 51bcc8c6a7649cfe0cc459510429438d 0 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=51bcc8c6a7649cfe0cc459510429438d 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GNk 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GNk 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GNk 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:54.688 
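gen_dhchap_key (nvmf/common.sh@723-732, traced below) draws random bytes with xxd, writes the secret to a mktemp'd /tmp/spdk.key-* file, and tightens it to mode 0600; the DHHC-1 wrapping itself happens in an inline `python -` whose body xtrace does not show. A sketch of the null-digest, 32-character case, where the wrapping (base64 of the key bytes plus a little-endian CRC32, per the usual TP-8006 secret representation) is an assumption about that hidden python step:

  key=$(xxd -p -c0 -l 16 /dev/urandom)        # 16 random bytes -> 32 hex chars, as in the trace
  file=$(mktemp -t spdk.key-null.XXX)
  python3 - "$key" 0 > "$file" <<'EOF'
  import base64, binascii, sys, zlib
  key = binascii.unhexlify(sys.argv[1])
  crc = zlib.crc32(key).to_bytes(4, "little")
  # assumed output format: DHHC-1:<2-hex-digit digest id>:<base64(key || crc32)>:
  print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
  EOF
  chmod 0600 "$file"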
13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=073885b45add10f89c971c5296501f650350aa2ec8d892a16773e3ffcbdda550 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZOZ 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 073885b45add10f89c971c5296501f650350aa2ec8d892a16773e3ffcbdda550 3 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 073885b45add10f89c971c5296501f650350aa2ec8d892a16773e3ffcbdda550 3 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=073885b45add10f89c971c5296501f650350aa2ec8d892a16773e3ffcbdda550 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:54.688 13:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZOZ 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZOZ 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ZOZ 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=02ed663f27b84c14c663f6ce02be6f2e531336767e738dd3 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.g4h 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 02ed663f27b84c14c663f6ce02be6f2e531336767e738dd3 0 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 02ed663f27b84c14c663f6ce02be6f2e531336767e738dd3 0 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=02ed663f27b84c14c663f6ce02be6f2e531336767e738dd3 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.g4h 00:31:54.688 13:39:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.g4h 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.g4h 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=abb9b839000d49da2683b5f82e7938b4030102efa15a5d5a 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Mce 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key abb9b839000d49da2683b5f82e7938b4030102efa15a5d5a 2 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 abb9b839000d49da2683b5f82e7938b4030102efa15a5d5a 2 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=abb9b839000d49da2683b5f82e7938b4030102efa15a5d5a 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Mce 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Mce 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Mce 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b3da24b49d28e790ad8ae0204f29a0e4 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6Sn 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b3da24b49d28e790ad8ae0204f29a0e4 1 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b3da24b49d28e790ad8ae0204f29a0e4 1 
00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b3da24b49d28e790ad8ae0204f29a0e4 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6Sn 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6Sn 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6Sn 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:54.688 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab5112851bd9e6eb2590fd263e580d86 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fJg 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab5112851bd9e6eb2590fd263e580d86 1 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab5112851bd9e6eb2590fd263e580d86 1 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ab5112851bd9e6eb2590fd263e580d86 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fJg 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fJg 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fJg 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=d81c5f7137737decc6131e6fdc18afda507c9eb3fc73e9d3 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Q8S 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d81c5f7137737decc6131e6fdc18afda507c9eb3fc73e9d3 2 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d81c5f7137737decc6131e6fdc18afda507c9eb3fc73e9d3 2 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d81c5f7137737decc6131e6fdc18afda507c9eb3fc73e9d3 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Q8S 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Q8S 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Q8S 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de99e2f17eac69da83ac10b3b2bf38e0 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ti2 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de99e2f17eac69da83ac10b3b2bf38e0 0 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de99e2f17eac69da83ac10b3b2bf38e0 0 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de99e2f17eac69da83ac10b3b2bf38e0 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ti2 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ti2 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ti2 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1de2867cf4abec2fc00fd1d737cd35c2b8dbe72c2b518aef14c6fe9ba9bde12b 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5WJ 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1de2867cf4abec2fc00fd1d737cd35c2b8dbe72c2b518aef14c6fe9ba9bde12b 3 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1de2867cf4abec2fc00fd1d737cd35c2b8dbe72c2b518aef14c6fe9ba9bde12b 3 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1de2867cf4abec2fc00fd1d737cd35c2b8dbe72c2b518aef14c6fe9ba9bde12b 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5WJ 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5WJ 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.5WJ 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3713023 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3713023 ']' 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
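
The gen_dhchap_key calls traced above all follow the same pattern: pull len/2 random bytes from /dev/urandom with xxd, wrap the resulting hex string into a DHHC-1:<digest-id>:<base64>: secret with a short python step, and store it in a mode-0600 temp file (digest ids from the trace: null=0, sha256=1, sha384=2, sha512=3). A minimal sketch of one iteration, assuming the hidden python body base64-encodes the ASCII hex key plus a CRC32 trailer (the trace only shows "python -"):

# One gen_dhchap_key iteration, reconstructed from the xtrace above.
# Assumption: the python step appends a little-endian CRC32 of the key string
# before base64-encoding; that detail is not visible in the log.
digest=1                                          # sha256
len=32                                            # secret length in hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. 51bcc8c6a7649cfe0cc459510429438d
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(key.encode() + crc).decode()))
PY
chmod 0600 "$file"
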
00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.949 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GNk 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ZOZ ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZOZ 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.g4h 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Mce ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Mce 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6Sn 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fJg ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fJg 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Q8S 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ti2 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ti2 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5WJ 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
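
By this point all five host keys and their four controller counterparts have been handed to the running nvmf_tgt via keyring_file_add_key. Outside the rpc_cmd wrapper, the same registration could be driven directly with scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten polled above; the explicit rpc.py invocation below is an assumption (rpc_cmd hides it), the file names are the ones generated in the trace:

# Register the generated secrets with nvmf_tgt, mirroring the keyring_file_add_key calls above.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
keys=(/tmp/spdk.key-null.GNk /tmp/spdk.key-null.g4h /tmp/spdk.key-sha256.6Sn /tmp/spdk.key-sha384.Q8S /tmp/spdk.key-sha512.5WJ)
ckeys=(/tmp/spdk.key-sha512.ZOZ /tmp/spdk.key-sha384.Mce /tmp/spdk.key-sha256.fJg /tmp/spdk.key-null.ti2 "")
for i in "${!keys[@]}"; do
    $rpc keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"    # keyid 4 has no controller key
    fi
done
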
00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:55.208 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:55.466 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:55.466 13:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:56.398 Waiting for block devices as requested 00:31:56.398 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:56.655 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:56.655 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:56.655 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:56.655 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:56.914 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:56.914 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:56.914 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:56.914 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:31:57.173 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:57.173 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:57.173 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:57.431 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:57.431 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:57.431 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:57.431 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:57.688 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:57.946 13:39:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:58.204 No valid GPT data, bailing 00:31:58.204 13:39:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:58.204 13:39:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:58.204 13:39:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:58.204 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:31:58.205 00:31:58.205 Discovery Log Number of Records 2, Generation counter 2 00:31:58.205 =====Discovery Log Entry 0====== 00:31:58.205 trtype: tcp 00:31:58.205 adrfam: ipv4 00:31:58.205 subtype: current discovery subsystem 00:31:58.205 treq: not specified, sq flow control disable supported 00:31:58.205 portid: 1 00:31:58.205 trsvcid: 4420 00:31:58.205 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:58.205 traddr: 10.0.0.1 00:31:58.205 eflags: none 00:31:58.205 sectype: none 00:31:58.205 =====Discovery Log Entry 1====== 00:31:58.205 trtype: tcp 00:31:58.205 adrfam: ipv4 00:31:58.205 subtype: nvme subsystem 00:31:58.205 treq: not specified, sq flow control disable supported 00:31:58.205 portid: 1 00:31:58.205 trsvcid: 4420 00:31:58.205 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:58.205 traddr: 10.0.0.1 00:31:58.205 eflags: none 00:31:58.205 sectype: none 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 
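
configure_kernel_target drives the kernel nvmet side entirely through configfs: subsystem nqn.2024-02.io.spdk:cnode0 with namespace 1 backed by the local /dev/nvme0n1, a TCP port on 10.0.0.1:4420, and a symlink tying the two together, which is what the discovery log above then reports. The xtrace hides the redirection targets of the echo commands, so the attribute names in this sketch are the standard nvmet configfs ones and should be read as assumptions:

# Condensed configure_kernel_target sequence (echo destinations assumed, not logged).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"            # assumed target of the @665 echo
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
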
]] 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.205 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.463 nvme0n1 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.463 
13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.463 
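
connect_authenticate is the initiator-side half of each round: it limits bdev_nvme's DH-HMAC-CHAP digests and DH groups to the combination under test, attaches controller nvme0 to the kernel target at 10.0.0.1:4420 with the matching key/ckey pair, checks the controller name, and detaches again. The first round traced above, expressed as plain RPC calls (the rpc.py invocation line is an assumption, as before; everything else is copied from the trace):

# First connect_authenticate round: all digests and DH groups allowed, key1/ckey1.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'    # expected: nvme0
$rpc bdev_nvme_detach_controller nvme0
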
13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.463 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.721 nvme0n1 00:31:58.721 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.721 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.721 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.721 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.721 13:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.721 13:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.721 13:39:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.721 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.722 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.722 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.722 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.722 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.722 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.722 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.722 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.722 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.980 nvme0n1 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
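
The kernel-side counterpart is nvmet_auth_set_key, which the trace shows echoing 'hmac(<digest>)', the ffdhe group, and the DHHC-1 secrets before every connect (nvmet_auth_init already created the host entry and linked it into the subsystem's allowed_hosts). The redirection targets are again invisible in the xtrace; the sketch below assumes they are the host's dhchap_* attributes under /sys/kernel/config/nvmet/hosts/:

# nvmet_auth_set_key sha256 ffdhe2048 1, reconstructed; the dhchap_* attribute names
# are assumptions, the echoed values are taken from the trace (via the key files).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
key=$(cat /tmp/spdk.key-null.g4h)       # key 1  (DHHC-1:00:MDJl...)
ckey=$(cat /tmp/spdk.key-sha384.Mce)    # ckey 1 (DHHC-1:02:YWJi...)
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo 'ffdhe2048'    > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"
echo "$ckey"        > "$host/dhchap_ctrl_key"
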
00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:58.980 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.981 nvme0n1 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.981 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:31:59.275 13:39:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.275 nvme0n1 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.275 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.557 nvme0n1 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.557 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.558 13:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.816 nvme0n1 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:59.816 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.817 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.075 nvme0n1 00:32:00.075 
13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.075 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.076 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.076 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:00.076 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.076 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.334 nvme0n1 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
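
For reference, the nvmet_auth_set_key calls traced above ("echo 'hmac(sha256)'", "echo ffdhe3072", "echo DHHC-1:...") are the target-side half of each DH-HMAC-CHAP iteration: they select the HMAC digest and FFDHE group under test and install the host's DHHC-1 secret (plus the optional controller secret for bidirectional authentication) for the allowed host NQN. A minimal sketch of that step, reconstructed from the host/auth.sh trace, follows; the kernel-nvmet configfs attribute paths are an assumption for illustration only, since the xtrace output does not show where the echoes are redirected, and keys[]/ckeys[] stand for the DHHC-1 key arrays defined earlier in the script.

# Sketch (assumptions noted above): target-side key setup for one digest/dhgroup/keyid.
nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey
    digest="$1" dhgroup="$2" keyid="$3"
    key="${keys[keyid]}" ckey="${ckeys[keyid]}"
    # Assumed configfs location of the allowed-host entry created earlier by the test.
    local hostdir="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo "hmac(${digest})" > "${hostdir}/dhchap_hash"      # e.g. hmac(sha256)
    echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "${key}"          > "${hostdir}/dhchap_key"       # DHHC-1:0x:...: host secret
    # The controller key is optional; keyid 4 has none, so the trace skips this echo there.
    [[ -z "${ckey}" ]] || echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"
}
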
00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.334 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.593 nvme0n1 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.593 
13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.593 13:39:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.593 13:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.851 nvme0n1 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:00.851 13:39:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.851 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.852 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.110 nvme0n1 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.110 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.367 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.368 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.368 13:39:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:01.368 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.368 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.625 nvme0n1 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.625 13:39:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.625 13:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.884 nvme0n1 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
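
The initiator-side counterpart, connect_authenticate, produces the repeating rpc_cmd pattern visible throughout this run: restrict bdev_nvme to the one digest/dhgroup pair under test, attach a controller to the target at 10.0.0.1:4420 with the numbered DH-HMAC-CHAP key (and the matching controller key when one exists), confirm that a controller named nvme0 appears, and detach it again. The sketch below replays one such iteration as standalone commands; it assumes rpc_cmd is a thin wrapper around scripts/rpc.py and that the named keys (key2/ckey2 here) were registered with the initiator application earlier in the script, which this excerpt does not show.

# Sketch (assumptions noted above): one "connect_authenticate sha256 ffdhe4096 2" iteration.
set -e
digest=sha256 dhgroup=ffdhe4096 keyid=2

# Limit DH-HMAC-CHAP negotiation to the digest/dhgroup pair being exercised.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach using the host key; the controller key enables bidirectional authentication.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The controller only exists if authentication succeeded; verify its name, then clean up.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0
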
00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.884 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.142 nvme0n1 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.142 13:39:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.142 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.706 nvme0n1 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:02.706 13:39:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:02.706 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.707 13:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.274 nvme0n1 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.274 
13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.274 13:40:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.274 13:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.839 nvme0n1 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.839 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.402 nvme0n1 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.402 
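On the target side, the nvmet_auth_set_key calls traced at host/auth.sh@42-51 only show the values being echoed; xtrace drops the redirection targets. A hypothetical reconstruction of what such a helper might write, assuming a kernel nvmet target whose per-host configfs entry exposes dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes (the paths and attribute names below are assumptions, not visible in this log):

  # Hypothetical sketch: only the echoed values are taken from the trace.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

      echo "hmac(${digest})" > "${host_dir}/dhchap_hash"
      echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"
      echo "${key}"          > "${host_dir}/dhchap_key"
      # keyid 4 carries no controller key, so the bidirectional secret is optional.
      [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
  }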
13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.402 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.403 13:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.967 nvme0n1 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.967 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.225 nvme0n1 00:32:05.225 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.225 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.225 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.225 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.225 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:05.482 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.483 13:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.412 nvme0n1 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.412 13:40:03 
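The DHHC-1 strings exchanged throughout this log are DH-HMAC-CHAP secrets in their textual representation: the literal prefix DHHC-1, a two-digit transformation indicator (00 for an unhashed secret; 01/02/03 conventionally mark secrets generated for SHA-256/384/512), the base64-encoded secret material, and a trailing colon. A short, purely string-level sketch that splits one of the traced secrets into those fields (the secret value is copied from the trace; nothing cryptographic happens here):

  secret='DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O:'
  IFS=: read -r prefix xform b64 _ <<< "${secret}"
  echo "prefix=${prefix} transform=${xform}"                                  # DHHC-1, 00
  echo "decoded length: $(printf '%s' "${b64}" | base64 -d | wc -c) bytes"    # 36 bytes here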
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.412 13:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.346 nvme0n1 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.346 13:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 nvme0n1 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.278 
13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.278 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
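The get_main_ns_ip helper expanded repeatedly at nvmf/common.sh@741-755 just maps the transport to the right address variable and prints it. Reconstructing it from the expansions visible in the trace (the TEST_TRANSPORT variable name and the early-return behaviour are assumptions; the candidate table and the 10.0.0.1 result are exactly what xtrace shows):

  # Reconstruction from the xtrace expansions; TEST_TRANSPORT expands to "tcp" in this run.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # name of the variable holding the RDMA target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # name of the variable holding the TCP initiator IP

      [[ -z ${TEST_TRANSPORT} ]] && return 1                     # traced as [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1     # traced as [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # Indirect expansion: NVMF_INITIATOR_IP resolves to 10.0.0.1 in this run.
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }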
00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.279 13:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.844 nvme0n1 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.102 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:09.103 
13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.103 13:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.038 nvme0n1 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.038 nvme0n1 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.038 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
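The ckey=(...) assignment that recurs at host/auth.sh@58 is the bash idiom that makes the controller key optional: the :+ expansion fills the array with the --dhchap-ctrlr-key argument pair only when ckeys[keyid] is non-empty, which is why the keyid 4 attaches in this log are issued with --dhchap-key key4 alone. A minimal illustration of the same expansion (array contents elided/hypothetical, idiom as in the script):

  declare -a ckeys=([1]="DHHC-1:02:example" [4]="")   # keyid 4 has no controller key
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=${keyid}: ... --dhchap-key key${keyid} ${ckey[*]}"
  done
  # keyid=1: ... --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # keyid=4: ... --dhchap-key key4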
00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.297 nvme0n1 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.297 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.556 nvme0n1 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.556 13:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.814 nvme0n1 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.814 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 nvme0n1 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
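The trace repeats one pattern per key id: restrict the host to a single digest/dhgroup pair with bdev_nvme_set_options, attach over TCP with the matching --dhchap-key (and --dhchap-ctrlr-key when one exists), confirm the controller shows up in bdev_nvme_get_controllers, then detach. A minimal sketch of that flow, assuming rpc_cmd is the harness wrapper around SPDK's rpc.py, the keys/ckeys arrays hold the DHHC-1 secrets printed in the log, and the names keyN/ckeyN were registered earlier in the run (key 4 has no controller key):

# Sketch only: mirrors the connect_authenticate steps seen in the trace, not the verbatim test script.
digest=sha384
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
  for keyid in "${!keys[@]}"; do
    # Pass --dhchap-ctrlr-key only when a controller key is defined for this key id.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Allow exactly one digest/dhgroup combination for this attempt.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
    # The attach only completes if DH-HMAC-CHAP succeeded; verify, then tear down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done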
00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.072 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.330 nvme0n1 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
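Inside connect_authenticate the controller key is optional, and the trace handles that with a parameter-expansion idiom: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expands to an empty array when ckeys[keyid] is empty (as for key 4) and to the two-word flag pair otherwise, so a single attach command covers both cases. A stand-alone illustration with placeholder values (not real secrets):

#!/usr/bin/env bash
# Illustrative only: shows how ${var:+...} drops the flag pair when no controller
# key is defined for a key id. The values below are placeholders.
ckeys=("placeholder-ctrl-key" "")
for keyid in 0 1; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
done
# keyid 0 gets the two-word flag pair; keyid 1 gets nothing.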
00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.330 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.589 nvme0n1 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.589 13:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.847 nvme0n1 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.847 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.105 nvme0n1 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.105 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.363 nvme0n1 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.363 13:40:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:12.363 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.364 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.621 nvme0n1 00:32:12.621 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.621 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.621 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.621 13:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.621 13:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.621 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.622 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.622 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.622 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.879 nvme0n1 00:32:12.879 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.879 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.879 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.879 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.879 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.879 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.138 13:40:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.138 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.396 nvme0n1 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:13.396 13:40:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.396 13:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.654 nvme0n1 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:13.654 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.941 nvme0n1 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.941 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.198 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.455 nvme0n1 00:32:14.455 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.455 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.455 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.455 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.456 13:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.713 13:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:14.713 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.713 13:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.971 nvme0n1 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.971 13:40:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.971 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.229 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.487 nvme0n1 00:32:15.487 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.487 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.487 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.487 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.487 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.746 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.746 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.746 13:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.746 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.746 13:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.746 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.311 nvme0n1 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.311 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
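The expansion in progress at this point is get_main_ns_ip from nvmf/common.sh, which picks the address the host will dial based on the transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which is why every lookup in this run ends in echo 10.0.0.1. A condensed reconstruction of that selection logic, assuming TEST_TRANSPORT and the NVMF_* variables are exported by the surrounding suite; the variable names come from the trace, while the function body below is a sketch rather than the suite's verbatim helper:

# Sketch of the IP-selection steps traced above; names come from the log,
# the real helper's error handling may differ.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                  # trace shows: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                   # -> NVMF_INITIATOR_IP

    [[ -z ${!ip} ]] && return 1                            # indirect: value of NVMF_INITIATOR_IP
    echo "${!ip}"                                          # 10.0.0.1 in this run
}

The value it prints feeds the -a argument of the bdev_nvme_attach_controller call that follows.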
00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.312 13:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.875 nvme0n1 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
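connect_authenticate, entered here for sha384/ffdhe8192 with keyid 0, drives the host side of every combination in the same way: restrict the allowed DH-HMAC-CHAP digest and DH group with bdev_nvme_set_options, attach the controller with the key for the current keyid (adding --dhchap-ctrlr-key only when a controller key is defined), confirm that nvme0 shows up in bdev_nvme_get_controllers, and detach it before the next pass. A minimal bash sketch of that RPC sequence, assuming scripts/rpc.py as the client in place of the suite's rpc_cmd wrapper; the flags, address and NQNs are the ones printed in the trace, the inline variable values are illustrative, and key0/ckey0 refer to keys registered earlier in the run (outside this excerpt):

# Sketch of one connect_authenticate pass; values mirror the trace above.
digest=sha384 dhgroup=ffdhe8192 keyid=0

# Allow exactly one digest/dhgroup pair for this iteration.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach and authenticate; the controller key is optional (keyid 0 has one
# in this run, keyid 4 does not).
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller authenticated and came up, then tear it down.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0

The same sequence then repeats for keyids 1-4 and for each remaining dhgroup and digest, which is what the rest of this trace records.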
00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.875 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 nvme0n1 00:32:17.806 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.806 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.806 13:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.806 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.806 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 13:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.806 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.738 nvme0n1 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.738 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.739 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.739 13:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.739 13:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.739 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.739 13:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.670 nvme0n1 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.670 13:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.603 nvme0n1 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.603 13:40:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.603 13:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.536 nvme0n1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.537 nvme0n1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.537 13:40:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.537 13:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.795 nvme0n1 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.795 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.053 nvme0n1 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.053 13:40:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:22.053 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.054 13:40:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.054 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.312 nvme0n1 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.312 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.313 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.313 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.313 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.313 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.571 nvme0n1 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.571 13:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.830 nvme0n1 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.830 
13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.830 13:40:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.830 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.089 nvme0n1 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
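The trace above, and every iteration that follows, repeats the same short cycle for each digest/dhgroup/keyid combination. As a reading aid, here is a minimal shell sketch of that cycle, built only from the commands as they appear in this run; it assumes rpc_cmd is the autotest wrapper around SPDK's rpc.py and that nvmet_auth_set_key is the host/auth.sh helper that installs the DHHC-1 key (and optional controller key) on the target side, so treat it as an illustration of the flow rather than the literal test code.

  # Target side: register the DH-HMAC-CHAP key for this keyid (host/auth.sh helper).
  nvmet_auth_set_key sha512 ffdhe2048 1

  # Host side: restrict the initiator to the digest and DH group under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Connect with the matching key pair, then verify the controller came up as nvme0.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0

  # Detach before moving on to the next keyid / dhgroup.
  rpc_cmd bdev_nvme_detach_controller nvme0

When a keyid has no controller key (keyid 4 in this run), the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion visible in the trace simply drops the --dhchap-ctrlr-key argument from the attach call.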
00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.089 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.347 nvme0n1 00:32:23.347 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.347 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.347 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.347 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.347 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.347 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.347 13:40:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.347 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.347 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.348 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.605 nvme0n1 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.605 
13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.605 13:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.864 nvme0n1 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.864 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.122 nvme0n1 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.122 13:40:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.122 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.123 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.380 nvme0n1 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.380 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.638 13:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.897 nvme0n1 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.897 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.155 nvme0n1 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.155 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.156 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.414 nvme0n1 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.414 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
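The entries above are the sha512/ffdhe6144 pass of the test's nested loop over DH-CHAP digests, DH groups and key indexes: for each key ID the target is provisioned via nvmet_auth_set_key (the echo 'hmac(sha512)', echo ffdhe6144 and echo DHHC-1:... lines), and connect_authenticate then restricts the SPDK initiator to that digest/DH group, attaches a controller with the matching key pair, checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it again. A minimal sketch of that host-side cycle using SPDK's scripts/rpc.py directly is shown below; the RPC names and flags are the ones traced in the log, while the rpc.py path, address, NQNs and the key0/ckey0 key names are placeholders (the excerpt does not show how those key names were registered with the initiator).

    #!/usr/bin/env bash
    # Sketch of one host-side iteration of the auth loop (sha512 / ffdhe6144 / key ID 0).
    # Assumptions: the target listens on 10.0.0.1:4420, and key0/ckey0 were registered
    # beforehand; the rpc.py location and the NQNs are placeholders.
    set -euo pipefail

    rpc=scripts/rpc.py
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0

    # 1) Limit the initiator to the digest and DH group under test.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # 2) Attach with the host key and (for bidirectional auth) the controller key.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3) Verify the controller authenticated and came up, then detach for the next key.
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0
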
00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.673 13:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.931 nvme0n1 00:32:25.931 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.931 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.931 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.931 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.931 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.931 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
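The nvmet_auth_set_key half of each iteration (the echo 'hmac(sha512)', echo ffdhe6144 and echo DHHC-1:... entries traced above for key ID 1) loads the same secrets on the target, which in this test is the Linux kernel nvmet target driven through configfs. The sketch below shows what that provisioning plausibly looks like; the configfs paths and attribute names are an assumption based on the kernel's nvmet in-band authentication support and are not visible in the excerpt itself, while the DHHC-1:NN:...: strings are quoted verbatim from the log (the NN field records how the secret was transformed before transport, 00 meaning no transformation).

    #!/usr/bin/env bash
    # Sketch of the target-side provisioning for key ID 1 (kernel nvmet via configfs).
    # The directory layout and attribute names are assumptions; the key strings are
    # the ones echoed in the log above.
    set -euo pipefail

    hostnqn=nqn.2024-02.io.spdk:host0
    host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

    echo 'hmac(sha512)' > "$host_cfs/dhchap_hash"     # digest under test
    echo 'ffdhe6144'    > "$host_cfs/dhchap_dhgroup"  # DH group under test
    # Host secret; the initiator's --dhchap-key must resolve to the same value.
    echo 'DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==:' \
        > "$host_cfs/dhchap_key"
    # Controller secret for bidirectional authentication (skipped when ckey is empty).
    echo 'DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==:' \
        > "$host_cfs/dhchap_ctrl_key"
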
00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.189 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.448 nvme0n1 00:32:26.448 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.448 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.448 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.448 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.448 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.448 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.706 13:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.272 nvme0n1 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.272 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.273 13:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 nvme0n1 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.839 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.097 nvme0n1 00:32:28.097 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.385 13:40:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFiY2M4YzZhNzY0OWNmZTBjYzQ1OTUxMDQyOTQzOGRqHC9O: 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczODg1YjQ1YWRkMTBmODljOTcxYzUyOTY1MDFmNjUwMzUwYWEyZWM4ZDg5MmExNjc3M2UzZmZjYmRkYTU1MIQhwAM=: 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.385 13:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.386 13:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.386 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.386 13:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.319 nvme0n1 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.319 13:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.252 nvme0n1 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.252 13:40:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjNkYTI0YjQ5ZDI4ZTc5MGFkOGFlMDIwNGYyOWEwZTRZ3XoJ: 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: ]] 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI1MTEyODUxYmQ5ZTZlYjI1OTBmZDI2M2U1ODBkODYqKih/: 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.252 13:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.818 nvme0n1 00:32:30.818 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.818 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.818 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.818 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.818 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.818 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDgxYzVmNzEzNzczN2RlY2M2MTMxZTZmZGMxOGFmZGE1MDdjOWViM2ZjNzNlOWQzV1OtpQ==: 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: ]] 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU5OWUyZjE3ZWFjNjlkYTgzYWMxMGIzYjJiZjM4ZTBX5kI0: 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:31.075 13:40:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.075 13:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.076 13:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.076 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.076 13:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.008 nvme0n1 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWRlMjg2N2NmNGFiZWMyZmMwMGZkMWQ3MzdjZDM1YzJiOGRiZTcyYzJiNTE4YWVmMTRjNmZlOWJhOWJkZTEyYkx7wxg=: 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:32.008 13:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.940 nvme0n1 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJlZDY2M2YyN2I4NGMxNGM2NjNmNmNlMDJiZTZmMmU1MzEzMzY3NjdlNzM4ZGQzDyt3nA==: 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiOWI4MzkwMDBkNDlkYTI2ODNiNWY4MmU3OTM4YjQwMzAxMDJlZmExNWE1ZDVhKB9ovQ==: 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.940 
13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.940 request: 00:32:32.940 { 00:32:32.940 "name": "nvme0", 00:32:32.940 "trtype": "tcp", 00:32:32.940 "traddr": "10.0.0.1", 00:32:32.940 "adrfam": "ipv4", 00:32:32.940 "trsvcid": "4420", 00:32:32.940 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.940 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.940 "prchk_reftag": false, 00:32:32.940 "prchk_guard": false, 00:32:32.940 "hdgst": false, 00:32:32.940 "ddgst": false, 00:32:32.940 "method": "bdev_nvme_attach_controller", 00:32:32.940 "req_id": 1 00:32:32.940 } 00:32:32.940 Got JSON-RPC error response 00:32:32.940 response: 00:32:32.940 { 00:32:32.940 "code": -5, 00:32:32.940 "message": "Input/output error" 00:32:32.940 } 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.940 request: 00:32:32.940 { 00:32:32.940 "name": "nvme0", 00:32:32.940 "trtype": "tcp", 00:32:32.940 "traddr": "10.0.0.1", 00:32:32.940 "adrfam": "ipv4", 00:32:32.940 "trsvcid": "4420", 00:32:32.940 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.940 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.940 "prchk_reftag": false, 00:32:32.940 "prchk_guard": false, 00:32:32.940 "hdgst": false, 00:32:32.940 "ddgst": false, 00:32:32.940 "dhchap_key": "key2", 00:32:32.940 "method": "bdev_nvme_attach_controller", 00:32:32.940 "req_id": 1 00:32:32.940 } 00:32:32.940 Got JSON-RPC error response 00:32:32.940 response: 00:32:32.940 { 00:32:32.940 "code": -5, 00:32:32.940 "message": "Input/output error" 00:32:32.940 } 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:32.940 13:40:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.940 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.941 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.941 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.197 request: 00:32:33.197 { 00:32:33.197 "name": "nvme0", 00:32:33.197 "trtype": "tcp", 00:32:33.197 "traddr": "10.0.0.1", 00:32:33.197 "adrfam": "ipv4", 
00:32:33.197 "trsvcid": "4420", 00:32:33.197 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:33.197 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:33.197 "prchk_reftag": false, 00:32:33.197 "prchk_guard": false, 00:32:33.197 "hdgst": false, 00:32:33.197 "ddgst": false, 00:32:33.197 "dhchap_key": "key1", 00:32:33.197 "dhchap_ctrlr_key": "ckey2", 00:32:33.197 "method": "bdev_nvme_attach_controller", 00:32:33.197 "req_id": 1 00:32:33.197 } 00:32:33.197 Got JSON-RPC error response 00:32:33.197 response: 00:32:33.197 { 00:32:33.197 "code": -5, 00:32:33.197 "message": "Input/output error" 00:32:33.198 } 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:33.198 rmmod nvme_tcp 00:32:33.198 rmmod nvme_fabrics 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3713023 ']' 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3713023 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3713023 ']' 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3713023 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3713023 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3713023' 00:32:33.198 killing process with pid 3713023 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3713023 00:32:33.198 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3713023 00:32:33.454 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:32:33.454 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:33.454 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:33.454 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:33.454 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:33.454 13:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.454 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:33.454 13:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:35.355 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:35.613 13:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:36.545 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:36.545 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:36.545 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:36.545 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:36.545 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:36.803 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:36.803 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:36.803 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:36.803 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:36.803 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:36.803 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:36.803 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:36.803 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:36.803 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:36.803 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:36.803 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:37.743 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:37.743 13:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GNk /tmp/spdk.key-null.g4h /tmp/spdk.key-sha256.6Sn /tmp/spdk.key-sha384.Q8S /tmp/spdk.key-sha512.5WJ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:37.743 13:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:39.118 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:39.118 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:39.118 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:39.118 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:39.118 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:39.118 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:39.118 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:39.118 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:39.118 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:39.118 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:39.118 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:39.118 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:39.118 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:39.118 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:39.118 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:39.118 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:39.118 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:39.118 00:32:39.118 real 0m47.262s 00:32:39.118 user 0m44.498s 00:32:39.118 sys 0m5.917s 00:32:39.118 13:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.118 13:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.118 ************************************ 00:32:39.118 END TEST nvmf_auth_host 00:32:39.118 ************************************ 00:32:39.118 13:40:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:39.118 13:40:36 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:32:39.118 13:40:36 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:39.118 13:40:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:39.118 13:40:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.118 13:40:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.118 ************************************ 00:32:39.118 START TEST nvmf_digest 00:32:39.118 ************************************ 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:39.118 * Looking for test storage... 
00:32:39.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.118 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:39.377 13:40:36 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:39.377 13:40:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:41.279 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:41.279 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:41.279 Found net devices under 0000:09:00.0: cvl_0_0 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:41.279 Found net devices under 0000:09:00.1: cvl_0_1 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:41.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:32:41.279 00:32:41.279 --- 10.0.0.2 ping statistics --- 00:32:41.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.279 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:32:41.279 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:32:41.538 00:32:41.538 --- 10.0.0.1 ping statistics --- 00:32:41.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.538 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:41.538 ************************************ 00:32:41.538 START TEST nvmf_digest_clean 00:32:41.538 ************************************ 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3722194 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3722194 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3722194 ']' 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.538 
13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:41.538 13:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.538 [2024-07-12 13:40:38.854733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:32:41.539 [2024-07-12 13:40:38.854819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.539 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.539 [2024-07-12 13:40:38.896413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:41.539 [2024-07-12 13:40:38.924009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.797 [2024-07-12 13:40:39.015218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.797 [2024-07-12 13:40:39.015263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.797 [2024-07-12 13:40:39.015286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.797 [2024-07-12 13:40:39.015306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.797 [2024-07-12 13:40:39.015343] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:41.797 [2024-07-12 13:40:39.015382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.797 null0 00:32:41.797 [2024-07-12 13:40:39.206878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.797 [2024-07-12 13:40:39.231083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3722219 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3722219 /var/tmp/bperf.sock 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3722219 ']' 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:32:41.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:41.797 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:42.055 [2024-07-12 13:40:39.278989] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:32:42.055 [2024-07-12 13:40:39.279065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722219 ] 00:32:42.055 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.055 [2024-07-12 13:40:39.312008] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:42.055 [2024-07-12 13:40:39.339924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.055 [2024-07-12 13:40:39.427115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.055 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:42.055 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:42.055 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:42.055 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:42.055 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:42.620 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.620 13:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.878 nvme0n1 00:32:42.878 13:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:42.878 13:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:43.135 Running I/O for 2 seconds... 
00:32:45.035 00:32:45.035 Latency(us) 00:32:45.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.035 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:45.035 nvme0n1 : 2.00 20193.15 78.88 0.00 0.00 6330.96 2779.21 13689.74 00:32:45.035 =================================================================================================================== 00:32:45.035 Total : 20193.15 78.88 0.00 0.00 6330.96 2779.21 13689.74 00:32:45.035 0 00:32:45.035 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:45.035 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:45.035 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:45.035 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:45.035 | select(.opcode=="crc32c") 00:32:45.035 | "\(.module_name) \(.executed)"' 00:32:45.035 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3722219 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3722219 ']' 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3722219 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3722219 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3722219' 00:32:45.293 killing process with pid 3722219 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3722219 00:32:45.293 Received shutdown signal, test time was about 2.000000 seconds 00:32:45.293 00:32:45.293 Latency(us) 00:32:45.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.293 =================================================================================================================== 00:32:45.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.293 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3722219 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:45.555 13:40:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3722628 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3722628 /var/tmp/bperf.sock 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3722628 ']' 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:45.555 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:45.556 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:45.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:45.556 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:45.556 13:40:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:45.556 [2024-07-12 13:40:43.017175] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:32:45.556 [2024-07-12 13:40:43.017253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722628 ] 00:32:45.556 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:45.556 Zero copy mechanism will not be used. 00:32:45.862 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.862 [2024-07-12 13:40:43.048183] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:45.862 [2024-07-12 13:40:43.074731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.862 [2024-07-12 13:40:43.158974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.862 13:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:45.862 13:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:45.862 13:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:45.862 13:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:45.862 13:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:46.119 13:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.119 13:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.684 nvme0n1 00:32:46.684 13:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:46.684 13:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:46.943 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:46.943 Zero copy mechanism will not be used. 00:32:46.943 Running I/O for 2 seconds... 
00:32:48.839 00:32:48.839 Latency(us) 00:32:48.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.839 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:48.839 nvme0n1 : 2.01 2587.36 323.42 0.00 0.00 6179.30 1444.22 13204.29 00:32:48.839 =================================================================================================================== 00:32:48.839 Total : 2587.36 323.42 0.00 0.00 6179.30 1444.22 13204.29 00:32:48.839 0 00:32:48.839 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:48.839 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:48.839 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:48.839 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:48.839 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:48.839 | select(.opcode=="crc32c") 00:32:48.839 | "\(.module_name) \(.executed)"' 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3722628 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3722628 ']' 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3722628 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3722628 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3722628' 00:32:49.097 killing process with pid 3722628 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3722628 00:32:49.097 Received shutdown signal, test time was about 2.000000 seconds 00:32:49.097 00:32:49.097 Latency(us) 00:32:49.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.097 =================================================================================================================== 00:32:49.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:49.097 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3722628 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:49.354 13:40:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3723156 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3723156 /var/tmp/bperf.sock 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3723156 ']' 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:49.354 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.354 [2024-07-12 13:40:46.740082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:32:49.354 [2024-07-12 13:40:46.740161] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723156 ] 00:32:49.354 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.354 [2024-07-12 13:40:46.771025] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
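For reference, each run_bperf pass in this log drives bdevperf through the same four RPC steps; a condensed sketch follows (commands, socket path, and target address are copied from the lines above; the backgrounding of bdevperf is assumed, and the randwrite 4096/128 values are just this pass's settings):

  # Sketch: one digest_clean pass, assuming bdevperf is backgrounded and the
  # bperf.sock RPC socket is ready before the first rpc.py call.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # start bdevperf paused (--wait-for-rpc) with the pass's workload settings
  "$SPDK"/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # finish framework init over the per-run RPC socket
  "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" framework_start_init

  # attach the NVMe/TCP target with data digest enabled (--ddgst)
  "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the timed workload via bdevperf's RPC helper
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests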
00:32:49.354 [2024-07-12 13:40:46.797996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.611 [2024-07-12 13:40:46.882697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.611 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:49.611 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:49.611 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:49.611 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:49.611 13:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:49.868 13:40:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:49.868 13:40:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:50.125 nvme0n1 00:32:50.125 13:40:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:50.125 13:40:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:50.383 Running I/O for 2 seconds... 00:32:52.278 00:32:52.278 Latency(us) 00:32:52.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.278 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.278 nvme0n1 : 2.00 21920.89 85.63 0.00 0.00 5829.86 2500.08 11893.57 00:32:52.278 =================================================================================================================== 00:32:52.278 Total : 21920.89 85.63 0.00 0.00 5829.86 2500.08 11893.57 00:32:52.278 0 00:32:52.278 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:52.278 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:52.279 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:52.279 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:52.279 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:52.279 | select(.opcode=="crc32c") 00:32:52.279 | "\(.module_name) \(.executed)"' 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3723156 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 3723156 ']' 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3723156 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:52.536 13:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723156 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723156' 00:32:52.794 killing process with pid 3723156 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3723156 00:32:52.794 Received shutdown signal, test time was about 2.000000 seconds 00:32:52.794 00:32:52.794 Latency(us) 00:32:52.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.794 =================================================================================================================== 00:32:52.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3723156 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3723560 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3723560 /var/tmp/bperf.sock 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3723560 ']' 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:52.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
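The pass/fail decision after each run above comes from the accel statistics; a minimal sketch of that check, using the same jq filter and variable names that appear in the get_accel_stats lines of this log (software is the expected module because these passes run with scan_dsa=false):

  # Sketch: extract crc32c accel stats from bdevperf and verify module/count.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  read -r acc_module acc_executed < <(
      "$RPC" -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  exp_module=software                  # no DSA offload in these passes (scan_dsa=false)
  (( acc_executed > 0 ))               # crc32c must have been executed at least once
  [[ $acc_module == "$exp_module" ]]   # and by the expected module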
00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:52.794 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:53.052 [2024-07-12 13:40:50.272857] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:32:53.052 [2024-07-12 13:40:50.272923] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723560 ] 00:32:53.052 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:53.052 Zero copy mechanism will not be used. 00:32:53.052 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.052 [2024-07-12 13:40:50.304079] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:53.052 [2024-07-12 13:40:50.329924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.052 [2024-07-12 13:40:50.413363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.052 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.052 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:53.052 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:53.052 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:53.052 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:53.616 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.616 13:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.873 nvme0n1 00:32:53.873 13:40:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:53.873 13:40:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:54.130 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:54.130 Zero copy mechanism will not be used. 00:32:54.130 Running I/O for 2 seconds... 
00:32:56.027 00:32:56.027 Latency(us) 00:32:56.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.027 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:56.027 nvme0n1 : 2.01 1786.99 223.37 0.00 0.00 8928.11 6650.69 19709.35 00:32:56.027 =================================================================================================================== 00:32:56.027 Total : 1786.99 223.37 0.00 0.00 8928.11 6650.69 19709.35 00:32:56.027 0 00:32:56.027 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:56.027 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:56.027 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:56.027 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:56.027 | select(.opcode=="crc32c") 00:32:56.027 | "\(.module_name) \(.executed)"' 00:32:56.027 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3723560 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3723560 ']' 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3723560 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723560 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723560' 00:32:56.283 killing process with pid 3723560 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3723560 00:32:56.283 Received shutdown signal, test time was about 2.000000 seconds 00:32:56.283 00:32:56.283 Latency(us) 00:32:56.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.283 =================================================================================================================== 00:32:56.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:56.283 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3723560 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3722194 00:32:56.541 13:40:53 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3722194 ']' 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3722194 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3722194 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3722194' 00:32:56.541 killing process with pid 3722194 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3722194 00:32:56.541 13:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3722194 00:32:56.799 00:32:56.799 real 0m15.379s 00:32:56.799 user 0m30.301s 00:32:56.799 sys 0m4.019s 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:56.799 ************************************ 00:32:56.799 END TEST nvmf_digest_clean 00:32:56.799 ************************************ 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:56.799 ************************************ 00:32:56.799 START TEST nvmf_digest_error 00:32:56.799 ************************************ 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3723999 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3723999 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3723999 ']' 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:56.799 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.057 [2024-07-12 13:40:54.281879] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:32:57.057 [2024-07-12 13:40:54.281948] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.057 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.057 [2024-07-12 13:40:54.319090] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:57.057 [2024-07-12 13:40:54.344774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.057 [2024-07-12 13:40:54.429365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.057 [2024-07-12 13:40:54.429417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.057 [2024-07-12 13:40:54.429439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.057 [2024-07-12 13:40:54.429459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.057 [2024-07-12 13:40:54.429474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
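Before the error-injection runs below, the target side is brought up as reflected in the surrounding records; a rough sketch, assuming the rpc_cmd wrapper used by digest.sh maps to plain scripts/rpc.py calls against the default /var/tmp/spdk.sock, and that framework_start_init is the step that resumes the --wait-for-rpc'd target (not shown verbatim in this excerpt):

  # Sketch: nvmf target startup for the digest_error tests (paths from the log).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # start nvmf_tgt inside the test netns, paused until RPC configuration is done
  ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

  # route crc32c through the error-injection accel module before init completes
  "$SPDK"/scripts/rpc.py accel_assign_opc -o crc32c -m error

  # resume the target; the null0 bdev and the TCP listener on 10.0.0.2:4420
  # reported in the notices below are configured after this point
  "$SPDK"/scripts/rpc.py framework_start_init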
00:32:57.057 [2024-07-12 13:40:54.429509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.057 [2024-07-12 13:40:54.514212] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.057 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.315 null0 00:32:57.315 [2024-07-12 13:40:54.616373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.315 [2024-07-12 13:40:54.640554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3724142 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3724142 /var/tmp/bperf.sock 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3724142 ']' 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.315 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.315 [2024-07-12 13:40:54.684379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:32:57.315 [2024-07-12 13:40:54.684439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724142 ] 00:32:57.315 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.315 [2024-07-12 13:40:54.716097] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:57.315 [2024-07-12 13:40:54.741802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.572 [2024-07-12 13:40:54.825186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.572 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.572 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:57.572 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:57.572 13:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:57.830 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:57.830 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.830 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.830 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.830 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:57.830 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.088 nvme0n1 00:32:58.088 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:58.088 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.088 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:58.088 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.088 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:32:58.088 13:40:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:58.346 Running I/O for 2 seconds... 00:32:58.346 [2024-07-12 13:40:55.656433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.656477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.656495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.671978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.672007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.672023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.685791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.685822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.685839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.697477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.697507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.697523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.712137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.712164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.712180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.724401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.724430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.724447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.736770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 
13:40:55.736800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.736816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.751408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.751437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.751453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.763484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.763514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.763530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.774391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.774420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.774442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.788429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.788460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.788477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.801331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.801386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.346 [2024-07-12 13:40:55.801404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.346 [2024-07-12 13:40:55.812930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.346 [2024-07-12 13:40:55.812958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.347 [2024-07-12 13:40:55.812974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.826566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.826608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.826623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.839530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.839560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.839575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.851681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.851724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.851740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.864671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.864715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.864731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.875419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.875449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.875466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.889312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.889351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.889367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.902181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.902209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.902240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.914787] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.914815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.914830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.928792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.928821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.928836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.940480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.940509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.940525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.954954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.604 [2024-07-12 13:40:55.954982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.604 [2024-07-12 13:40:55.954998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.604 [2024-07-12 13:40:55.965914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:55.965943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:55.965958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.605 [2024-07-12 13:40:55.978795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:55.978821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:55.978836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.605 [2024-07-12 13:40:55.991436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:55.991462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:55.991477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
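The long run of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" records here is the intended effect of the setup earlier in this test; condensed as a sketch (commands copied from the digest.sh lines above; rpc_cmd is assumed to talk to the target's default /var/tmp/spdk.sock, while bperf_rpc uses /var/tmp/bperf.sock):

  # Sketch: error-injection path that produces the digest errors logged above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # bperf side: count NVMe errors and retry indefinitely at the bdev layer
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target side: start with injection disabled, then attach with data digest on
  "$RPC" accel_error_inject_error -o crc32c -t disable
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # inject crc32c corruption on the target (same -i 256 argument as in the log)
  # and run the workload; each corrupted digest surfaces at the initiator as a
  # data digest error / transient transport error, which the bdev layer retries
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests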
00:32:58.605 [2024-07-12 13:40:56.004234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:56.004262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:56.004277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.605 [2024-07-12 13:40:56.015522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:56.015550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:56.015564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.605 [2024-07-12 13:40:56.030205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:56.030233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:56.030248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.605 [2024-07-12 13:40:56.042756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:56.042783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:56.042797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.605 [2024-07-12 13:40:56.055232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:56.055262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:56.055278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.605 [2024-07-12 13:40:56.066284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.605 [2024-07-12 13:40:56.066331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.605 [2024-07-12 13:40:56.066347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.863 [2024-07-12 13:40:56.081423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.863 [2024-07-12 13:40:56.081452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.863 [2024-07-12 13:40:56.081469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.863 [2024-07-12 13:40:56.093900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.863 [2024-07-12 13:40:56.093930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.863 [2024-07-12 13:40:56.093945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.863 [2024-07-12 13:40:56.106505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.863 [2024-07-12 13:40:56.106533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.863 [2024-07-12 13:40:56.106555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.863 [2024-07-12 13:40:56.118709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.864 [2024-07-12 13:40:56.118753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.864 [2024-07-12 13:40:56.118770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.864 [2024-07-12 13:40:56.131447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.864 [2024-07-12 13:40:56.131476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.864 [2024-07-12 13:40:56.131493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.864 [2024-07-12 13:40:56.144577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.864 [2024-07-12 13:40:56.144607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.864 [2024-07-12 13:40:56.144623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.864 [2024-07-12 13:40:56.156219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.864 [2024-07-12 13:40:56.156245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.864 [2024-07-12 13:40:56.156259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.864 [2024-07-12 13:40:56.171350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:32:58.864 [2024-07-12 13:40:56.171379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.864 [2024-07-12 13:40:56.171395] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:58.864 [2024-07-12 13:40:56.182805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0)
00:32:58.864 [2024-07-12 13:40:56.182833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.864 [2024-07-12 13:40:56.182848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern -- a data digest error on tqpair=(0xe3e0d0) from nvme_tcp.c:1459, the READ command print from nvme_qpair.c: 243, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c: 474 -- repeats continuously for the rest of the 2-second run, with only the timestamp, cid and lba changing from entry to entry ...]
00:33:00.159 [2024-07-12 13:40:57.598882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0)
00:33:00.159 [2024-07-12 13:40:57.598911] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.159 [2024-07-12 13:40:57.598942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.160 [2024-07-12 13:40:57.611992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:33:00.160 [2024-07-12 13:40:57.612023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.160 [2024-07-12 13:40:57.612040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.160 [2024-07-12 13:40:57.624195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:33:00.160 [2024-07-12 13:40:57.624224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.160 [2024-07-12 13:40:57.624253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.417 [2024-07-12 13:40:57.637810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3e0d0) 00:33:00.417 [2024-07-12 13:40:57.637839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.417 [2024-07-12 13:40:57.637854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.417 00:33:00.417 Latency(us) 00:33:00.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.417 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:00.417 nvme0n1 : 2.01 19826.20 77.45 0.00 0.00 6447.06 3640.89 17864.63 00:33:00.417 =================================================================================================================== 00:33:00.417 Total : 19826.20 77.45 0.00 0.00 6447.06 3640.89 17864.63 00:33:00.417 0 00:33:00.417 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:00.417 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:00.417 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:00.417 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:00.417 | .driver_specific 00:33:00.417 | .nvme_error 00:33:00.417 | .status_code 00:33:00.417 | .command_transient_transport_error' 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 )) 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3724142 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3724142 ']' 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3724142 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
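The (( 155 > 0 )) assertion in the xtrace above comes from get_transient_errcount, which reads the bdev's NVMe error counters over the bperf RPC socket and pulls out the transient-transport-error count with jq. Reconstructed from the trace (paths, socket and jq filter as they appear above), the helper amounts to roughly:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded for the bdev;
    # only populated because the controller was set up with --nvme-error-stat.
    get_transient_errcount() {
        local bdev=$1
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The digest-error test only passes if the injected corruption produced errors.
    (( $(get_transient_errcount nvme0n1) > 0 ))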
common/autotest_common.sh@953 -- # uname 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3724142 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3724142' 00:33:00.675 killing process with pid 3724142 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3724142 00:33:00.675 Received shutdown signal, test time was about 2.000000 seconds 00:33:00.675 00:33:00.675 Latency(us) 00:33:00.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.675 =================================================================================================================== 00:33:00.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:00.675 13:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3724142 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3724544 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3724544 /var/tmp/bperf.sock 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3724544 ']' 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:00.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:00.965 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:00.965 [2024-07-12 13:40:58.260072] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 
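For reference, the bdevperf invocation traced above for this sub-test (randread, 128 KiB I/O, queue depth 16) can be reproduced stand-alone roughly as follows; the polling loop is a simplified stand-in for the autotest waitforlisten helper, and rpc_get_methods is used here only as a cheap liveness probe.

    spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Core mask 0x2, private RPC socket, 128 KiB random reads, 2 s runtime, queue depth 16;
    # -z keeps the job idle until perform_tests is sent over the RPC socket.
    "$spdk_dir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Simplified waitforlisten: poll the RPC socket until bdevperf answers (or exits).
    while ! "$spdk_dir/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
        kill -0 "$bperfpid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
        sleep 0.5
    done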
00:33:00.966 [2024-07-12 13:40:58.260149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724544 ] 00:33:00.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:00.966 Zero copy mechanism will not be used. 00:33:00.966 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.966 [2024-07-12 13:40:58.290668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:00.966 [2024-07-12 13:40:58.317168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.966 [2024-07-12 13:40:58.398225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.224 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.224 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:01.224 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:01.224 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:01.481 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:01.481 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.481 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.481 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.481 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.481 13:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.738 nvme0n1 00:33:01.738 13:40:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:01.738 13:40:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.738 13:40:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.738 13:40:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.738 13:40:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:01.738 13:40:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:01.996 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:01.996 Zero copy mechanism will not be used. 00:33:01.996 Running I/O for 2 seconds... 
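Pulling the xtrace lines above together, the per-run setup and trigger reduce to a handful of RPC calls. A sketch reconstructed from the trace follows; socket paths, address and NQN are taken verbatim from the log, while the socket targeted by the plain rpc_cmd calls (the NVMe-oF target side) is inferred rather than shown.

    spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_py=$spdk_dir/scripts/rpc.py

    # In bdevperf: keep per-command NVMe error statistics and retry indefinitely, so the
    # injected digest errors show up as counters instead of failing the job.
    "$rpc_py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # On the target (default RPC socket, inferred): clear any crc32c error injection
    # left armed by a previous run.
    "$rpc_py" accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF/TCP controller with data digest enabled; it appears as nvme0n1.
    "$rpc_py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm the fault (corrupt the crc32c result for 32 operations), then kick off the
    # queued bdevperf job over its RPC socket.
    "$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$spdk_dir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests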
00:33:01.996 [2024-07-12 13:40:59.253514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.253562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.253581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.264004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.264036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.264061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.273542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.273574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.273605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.283806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.283851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.283867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.294066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.294109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.294124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.304279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.304335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.304353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.314461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.314490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.314507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.324546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.324574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.324589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.335019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.335047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.335062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:01.996 [2024-07-12 13:40:59.345375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.996 [2024-07-12 13:40:59.345418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.996 [2024-07-12 13:40:59.345434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.355682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.355730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.355746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.366943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.366974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.366992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.378214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.378244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.378274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.389029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.389060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.389076] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.400882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.400911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.400942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.412473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.412519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.412535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.423536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.423581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.423597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.433602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.433649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.433665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.443762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.443806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.443821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.453982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.454025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.454040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:01.997 [2024-07-12 13:40:59.464038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:01.997 [2024-07-12 13:40:59.464069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.997 [2024-07-12 13:40:59.464085] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.474261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.474290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.474332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.484982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.485028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.485044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.495914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.495945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.495962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.506689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.506732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.506748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.516863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.516907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.516922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.526943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.526987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.527002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.537068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.537097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:02.254 [2024-07-12 13:40:59.537120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.547249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.547292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.547307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.557408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.557454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.557470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.254 [2024-07-12 13:40:59.567636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.254 [2024-07-12 13:40:59.567665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.254 [2024-07-12 13:40:59.567697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.578062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.578106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.578121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.588464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.588494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.588510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.598596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.598652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.598667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.608717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.608746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.608776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.618666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.618695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.618710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.628718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.628766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.628782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.639211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.639253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.639267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.649778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.649820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.649835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.659840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.659882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.659897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.669951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.669995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.670010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.680077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.680123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.680138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.690254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.690297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.690312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.700388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.700416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.700446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.710412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.710440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.710455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.255 [2024-07-12 13:40:59.720499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.255 [2024-07-12 13:40:59.720543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.255 [2024-07-12 13:40:59.720558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.513 [2024-07-12 13:40:59.730523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.513 [2024-07-12 13:40:59.730553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.730569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.740642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.740687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.740704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.750758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 
[2024-07-12 13:40:59.750801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.750816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.760969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.761013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.761028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.771136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.771165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.771196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.781421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.781449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.781465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.791389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.791419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.791449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.801292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.801328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.801353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.811249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.811292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.811307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.821313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.821350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.821365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.831460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.831502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.831518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.841536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.841566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.841583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.851477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.851506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.851538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.861522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.861551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.861567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.871502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.871546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.871563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.881640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.881684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.881698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.891690] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.891732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.891747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.901764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.901807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.901822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.911801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.911843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.911858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.922060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.922103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.922119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.932274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.932327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.932361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.942446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.942475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.942491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.952632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.952675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.952691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:33:02.514 [2024-07-12 13:40:59.962733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.962776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.962791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.972837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.972882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.972903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.514 [2024-07-12 13:40:59.983119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.514 [2024-07-12 13:40:59.983148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.514 [2024-07-12 13:40:59.983163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:40:59.993437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:40:59.993491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:40:59.993508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.003525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.003556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.003572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.013562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.013592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.013609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.023600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.023644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.023661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.033824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.033856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.033886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.044446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.044486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.044502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.054639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.054687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.054703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.064883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.064921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.064937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.075221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.075251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.075267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.085454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.085485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.085501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.095796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.095839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.095854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.105999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.106027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.106041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.116288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.116339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.116354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.126544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.126572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.126587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.136560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.136591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.136622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.146783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.146827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.146843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.157013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.157066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.157081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.167210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.167252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.167267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.773 [2024-07-12 13:41:00.177427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.773 [2024-07-12 13:41:00.177470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.773 [2024-07-12 13:41:00.177486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.774 [2024-07-12 13:41:00.187382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.774 [2024-07-12 13:41:00.187425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.774 [2024-07-12 13:41:00.187440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.774 [2024-07-12 13:41:00.197344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.774 [2024-07-12 13:41:00.197393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.774 [2024-07-12 13:41:00.197409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.774 [2024-07-12 13:41:00.207406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.774 [2024-07-12 13:41:00.207450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.774 [2024-07-12 13:41:00.207466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.774 [2024-07-12 13:41:00.217594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.774 [2024-07-12 13:41:00.217637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.774 [2024-07-12 13:41:00.217652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.774 [2024-07-12 13:41:00.227727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.774 [2024-07-12 13:41:00.227770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.774 [2024-07-12 13:41:00.227786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.774 [2024-07-12 13:41:00.238021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:02.774 [2024-07-12 13:41:00.238062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:02.774 [2024-07-12 13:41:00.238082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.248256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.248285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.248302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.258233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.258276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.258291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.268387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.268432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.268448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.278518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.278547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.278579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.288661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.288689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.288722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.298619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.298662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.298677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.308647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.308691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.308707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.318637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.318680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.318695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.328703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.328752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.328768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.339166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.339194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.339223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.349168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.349210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.349225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.359216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.359245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.359260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.369257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.369300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.369327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.379300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.379352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.379369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.389507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.032 [2024-07-12 13:41:00.389535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.032 [2024-07-12 13:41:00.389565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.032 [2024-07-12 13:41:00.399659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.399718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.410134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.410162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.410190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.420071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.420115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.420131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.430213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.430255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.430270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.440472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.440515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.440532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.450519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 
00:33:03.033 [2024-07-12 13:41:00.450546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.450561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.460743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.460786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.460801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.470932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.470961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.470976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.481112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.481154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.481170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.491138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.491165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.491180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.033 [2024-07-12 13:41:00.501326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.033 [2024-07-12 13:41:00.501355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.033 [2024-07-12 13:41:00.501393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.511629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.511656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.511671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.521846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.521889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.521904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.531849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.531877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.531906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.542337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.542380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.542397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.552503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.552532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.552563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.562419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.562463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.562479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.572428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.572471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.572487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.582747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.582791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.582807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.593081] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.593126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.593142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.603233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.603276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.603292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.613208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.613254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.613270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.623622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.623663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.623677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.633764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.633807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.633822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.644065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.644109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.644125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.654375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.654403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.654419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:33:03.292 [2024-07-12 13:41:00.664551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.664595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.664610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.674585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.674629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.674651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.684787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.684830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.684845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.695189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.695217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.695247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.705389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.705434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.705450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.715474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.715502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.292 [2024-07-12 13:41:00.715518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.292 [2024-07-12 13:41:00.725502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.292 [2024-07-12 13:41:00.725546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.293 [2024-07-12 13:41:00.725562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.293 [2024-07-12 13:41:00.735723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.293 [2024-07-12 13:41:00.735765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.293 [2024-07-12 13:41:00.735780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.293 [2024-07-12 13:41:00.745919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.293 [2024-07-12 13:41:00.745947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.293 [2024-07-12 13:41:00.745977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.293 [2024-07-12 13:41:00.756087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.293 [2024-07-12 13:41:00.756114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.293 [2024-07-12 13:41:00.756129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.766411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.766445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.766462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.776520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.776550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.776581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.786565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.786595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.786611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.797070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.797114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.797130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.807251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.807294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.807309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.817329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.817357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.817374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.827328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.827371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.827387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.837481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.837509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.837538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.847562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.847592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.847608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.857711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.857753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.857768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.867733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.867777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:03.551 [2024-07-12 13:41:00.867792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.877814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.877840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.877869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.887937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.887979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.887994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.898079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.898105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.898119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.908068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.908109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.908123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.918206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.918233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.918248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.928373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.928417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.928432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.938154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.938196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.938215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.948288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.948340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.948359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.958366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.958406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.958421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.968401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.968428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.968442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.551 [2024-07-12 13:41:00.978529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.551 [2024-07-12 13:41:00.978573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.551 [2024-07-12 13:41:00.978589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.552 [2024-07-12 13:41:00.988771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.552 [2024-07-12 13:41:00.988813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.552 [2024-07-12 13:41:00.988829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.552 [2024-07-12 13:41:00.998775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.552 [2024-07-12 13:41:00.998801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.552 [2024-07-12 13:41:00.998815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.552 [2024-07-12 13:41:01.008713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.552 [2024-07-12 13:41:01.008758] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.552 [2024-07-12 13:41:01.008775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.552 [2024-07-12 13:41:01.018698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.552 [2024-07-12 13:41:01.018742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.552 [2024-07-12 13:41:01.018759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.810 [2024-07-12 13:41:01.028700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.810 [2024-07-12 13:41:01.028749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.810 [2024-07-12 13:41:01.028765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.810 [2024-07-12 13:41:01.038738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.810 [2024-07-12 13:41:01.038780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.810 [2024-07-12 13:41:01.038795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.810 [2024-07-12 13:41:01.048929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.810 [2024-07-12 13:41:01.048957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.810 [2024-07-12 13:41:01.048986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.810 [2024-07-12 13:41:01.059172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.810 [2024-07-12 13:41:01.059219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.810 [2024-07-12 13:41:01.059234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.810 [2024-07-12 13:41:01.069436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.810 [2024-07-12 13:41:01.069483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.810 [2024-07-12 13:41:01.069498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.810 [2024-07-12 13:41:01.079568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.810 [2024-07-12 13:41:01.079624] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.079639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.089814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.089862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.089877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.100076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.100104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.100135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.110292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.110343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.110365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.120473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.120516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.120532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.130865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.130909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.130925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.141144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.141186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.141200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.151341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.151385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.151400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.161485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.161527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.161542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.171586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.171614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.171644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.181749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.181777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.181809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.192051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.192078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.192093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.202487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.202537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.202554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.212871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.212915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.212930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.223133] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.223175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.223189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.233457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.233501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.233516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.811 [2024-07-12 13:41:01.243497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d18f00) 00:33:03.811 [2024-07-12 13:41:01.243527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.811 [2024-07-12 13:41:01.243542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.811 00:33:03.811 Latency(us) 00:33:03.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.811 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:03.811 nvme0n1 : 2.01 3040.32 380.04 0.00 0.00 5257.94 1498.83 12136.30 00:33:03.811 =================================================================================================================== 00:33:03.811 Total : 3040.32 380.04 0.00 0.00 5257.94 1498.83 12136.30 00:33:03.811 0 00:33:03.811 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:03.811 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:03.811 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:03.811 | .driver_specific 00:33:03.811 | .nvme_error 00:33:03.811 | .status_code 00:33:03.811 | .command_transient_transport_error' 00:33:03.811 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 )) 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3724544 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3724544 ']' 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3724544 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3724544 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3724544' 00:33:04.069 killing process with pid 3724544 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3724544 00:33:04.069 Received shutdown signal, test time was about 2.000000 seconds 00:33:04.069 00:33:04.069 Latency(us) 00:33:04.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.069 =================================================================================================================== 00:33:04.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:04.069 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3724544 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3724955 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3724955 /var/tmp/bperf.sock 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3724955 ']' 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:04.327 13:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.327 [2024-07-12 13:41:01.779930] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:04.327 [2024-07-12 13:41:01.780008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724955 ] 00:33:04.584 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.584 [2024-07-12 13:41:01.811166] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
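The digest.sh xtrace just above shows how the harness verifies that the injected CRC32C corruption was actually observed before it recycles bdevperf for the next case: get_transient_errcount (host/digest.sh@27-@28) asks the bdevperf app for bdev iostat over /var/tmp/bperf.sock and pulls the transient-transport-error counter out of the JSON, and the case only passes the count check at host/digest.sh@71 (here 196 > 0). A minimal stand-alone version of that check, assuming the SPDK rpc.py script and the same bperf.sock path used in this run:

  # count completions that bdevperf saw fail with COMMAND TRANSIENT TRANSPORT ERROR on nvme0n1
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "data digest errors surfaced as transient transport errors"

The per-status-code breakdown in the iostat output is presumably only populated because bdev_nvme_set_options was called with --nvme-error-stat when the controller was set up.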
00:33:04.584 [2024-07-12 13:41:01.839221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.584 [2024-07-12 13:41:01.928667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.584 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.584 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:04.584 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:04.584 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:04.841 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:04.841 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.841 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.841 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.841 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:04.841 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.405 nvme0n1 00:33:05.405 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:05.405 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.405 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.405 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.405 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:05.405 13:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.405 Running I/O for 2 seconds... 
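Immediately before "Running I/O for 2 seconds..." the trace walks through the setup for the randwrite/4096/128 error case: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, any previous crc32c error injection is disabled, the NVMe/TCP controller is attached with data digest (--ddgst) turned on, and only then is crc32c injection switched to "corrupt" so the digest checks fail during the run. A condensed sketch of that sequence, using the argument values exactly as they appear in the trace and assuming a bdevperf instance already listening on /var/tmp/bperf.sock (accel_error_inject_error goes through the harness's rpc_cmd helper rather than the bperf socket):

  BPERF_RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
  # keep per-status-code NVMe error counters; -1 retries failed I/O without limit
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # make sure no stale crc32c corruption is active while connecting
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # connect to the target with TCP data digest enabled
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # start corrupting crc32c results so data digest verification fails (arguments as in digest.sh)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  # run the timed workload that produces the digest errors logged below
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests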
00:33:05.405 [2024-07-12 13:41:02.876429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ed920 00:33:05.405 [2024-07-12 13:41:02.877662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.405 [2024-07-12 13:41:02.877699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.889209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fef90 00:33:05.663 [2024-07-12 13:41:02.890364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.890393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.901725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ee190 00:33:05.663 [2024-07-12 13:41:02.902989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.903016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.912861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e4578 00:33:05.663 [2024-07-12 13:41:02.914133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.914160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.923917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e7818 00:33:05.663 [2024-07-12 13:41:02.924816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.924859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.935828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f0788 00:33:05.663 [2024-07-12 13:41:02.936599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.936628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.948134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ebfd0 00:33:05.663 [2024-07-12 13:41:02.949057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.949085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.962296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fbcf0 00:33:05.663 [2024-07-12 13:41:02.964098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.964125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.970596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ecc78 00:33:05.663 [2024-07-12 13:41:02.971473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.971499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.981665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fb048 00:33:05.663 [2024-07-12 13:41:02.982486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.982513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:02.994792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f57b0 00:33:05.663 [2024-07-12 13:41:02.995863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:02.995904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.006793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f6890 00:33:05.663 [2024-07-12 13:41:03.007819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.007861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.018660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f81e0 00:33:05.663 [2024-07-12 13:41:03.019775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.019817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.030552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f92c0 00:33:05.663 [2024-07-12 13:41:03.031589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.031621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.042516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190dece0 00:33:05.663 [2024-07-12 13:41:03.043547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.043589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.054506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e0a68 00:33:05.663 [2024-07-12 13:41:03.055514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.055556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.066483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e1b48 00:33:05.663 [2024-07-12 13:41:03.067519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.067561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.078571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190eaab8 00:33:05.663 [2024-07-12 13:41:03.079614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.079656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.090451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fc998 00:33:05.663 [2024-07-12 13:41:03.091479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.091507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.102400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fe720 00:33:05.663 [2024-07-12 13:41:03.103416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.103458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.114279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190df550 00:33:05.663 [2024-07-12 13:41:03.115333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.115375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.663 [2024-07-12 13:41:03.126283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190de038 00:33:05.663 [2024-07-12 13:41:03.127297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.663 [2024-07-12 13:41:03.127346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.138405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fb480 00:33:05.920 [2024-07-12 13:41:03.139517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.139546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.150369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fa3a0 00:33:05.920 [2024-07-12 13:41:03.151396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.151423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.162377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fdeb0 00:33:05.920 [2024-07-12 13:41:03.163391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.163433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.174307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f0bc0 00:33:05.920 [2024-07-12 13:41:03.175339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.175380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.187676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f4b08 00:33:05.920 [2024-07-12 13:41:03.189234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.189260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.199998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e5220 00:33:05.920 [2024-07-12 13:41:03.201772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.201798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.212363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f2d80 00:33:05.920 [2024-07-12 13:41:03.214199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.214225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.220733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fbcf0 00:33:05.920 [2024-07-12 13:41:03.221555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.221582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.233024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ed4e8 00:33:05.920 [2024-07-12 13:41:03.233996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.234021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.920 [2024-07-12 13:41:03.245392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fb048 00:33:05.920 [2024-07-12 13:41:03.246510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.920 [2024-07-12 13:41:03.246537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.257710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ea680 00:33:05.921 [2024-07-12 13:41:03.258978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.259004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.270036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fa7d8 00:33:05.921 [2024-07-12 13:41:03.271494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.271521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.279845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e5658 00:33:05.921 [2024-07-12 13:41:03.280531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.280558] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.291822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f4f40 00:33:05.921 [2024-07-12 13:41:03.292891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.292933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.303686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ddc00 00:33:05.921 [2024-07-12 13:41:03.304899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.304940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.315746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fb048 00:33:05.921 [2024-07-12 13:41:03.316853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.316895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.329169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f9f68 00:33:05.921 [2024-07-12 13:41:03.330713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.330739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.340142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e6738 00:33:05.921 [2024-07-12 13:41:03.341294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.341355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.352047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190dece0 00:33:05.921 [2024-07-12 13:41:03.353070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.353098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.364044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f81e0 00:33:05.921 [2024-07-12 13:41:03.365395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 
13:41:03.365437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.377345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e1f80 00:33:05.921 [2024-07-12 13:41:03.379174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.379199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.921 [2024-07-12 13:41:03.385848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f6458 00:33:05.921 [2024-07-12 13:41:03.386700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.921 [2024-07-12 13:41:03.386725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.178 [2024-07-12 13:41:03.397923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e0a68 00:33:06.178 [2024-07-12 13:41:03.398806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.178 [2024-07-12 13:41:03.398848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.178 [2024-07-12 13:41:03.408856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f8618 00:33:06.178 [2024-07-12 13:41:03.409657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.178 [2024-07-12 13:41:03.409683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.178 [2024-07-12 13:41:03.421169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190eb760 00:33:06.178 [2024-07-12 13:41:03.422130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.422156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.434307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e0ea0 00:33:06.179 [2024-07-12 13:41:03.435607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.435650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.445353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f81e0 00:33:06.179 [2024-07-12 13:41:03.446487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:06.179 [2024-07-12 13:41:03.446513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.458533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e95a0 00:33:06.179 [2024-07-12 13:41:03.459927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.459952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.470769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f6020 00:33:06.179 [2024-07-12 13:41:03.472175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.472200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.480616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e99d8 00:33:06.179 [2024-07-12 13:41:03.481378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.481405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.492878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e4578 00:33:06.179 [2024-07-12 13:41:03.493841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.493870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.505104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e88f8 00:33:06.179 [2024-07-12 13:41:03.506266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.506294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.518690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f5378 00:33:06.179 [2024-07-12 13:41:03.520555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.520584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.526997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190eea00 00:33:06.179 [2024-07-12 13:41:03.527848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16694 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.527876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.538171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ed4e8 00:33:06.179 [2024-07-12 13:41:03.539019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.539044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.551448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ecc78 00:33:06.179 [2024-07-12 13:41:03.552526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.552554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.563639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ee5c8 00:33:06.179 [2024-07-12 13:41:03.564779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.564805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.574701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e4578 00:33:06.179 [2024-07-12 13:41:03.575802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.575827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.587079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f7970 00:33:06.179 [2024-07-12 13:41:03.588332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.588374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.598031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ef6a8 00:33:06.179 [2024-07-12 13:41:03.598902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.598928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.609888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fe2e8 00:33:06.179 [2024-07-12 13:41:03.610636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.610664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.622155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e6b70 00:33:06.179 [2024-07-12 13:41:03.623080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.623107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.634206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fd208 00:33:06.179 [2024-07-12 13:41:03.635449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.635476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.179 [2024-07-12 13:41:03.646049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fac10 00:33:06.179 [2024-07-12 13:41:03.647295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.179 [2024-07-12 13:41:03.647338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.659431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fbcf0 00:33:06.437 [2024-07-12 13:41:03.661151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.661177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.670402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e9e10 00:33:06.437 [2024-07-12 13:41:03.671798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.671823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.680864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e4140 00:33:06.437 [2024-07-12 13:41:03.682784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.682811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.691122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e99d8 00:33:06.437 [2024-07-12 13:41:03.691971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:12045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.691997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.703462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e73e0 00:33:06.437 [2024-07-12 13:41:03.704472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.704499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.715678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ef270 00:33:06.437 [2024-07-12 13:41:03.716833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.716858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.727948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ea248 00:33:06.437 [2024-07-12 13:41:03.729224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.729249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.738884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f81e0 00:33:06.437 [2024-07-12 13:41:03.739795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.739821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.750623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fb048 00:33:06.437 [2024-07-12 13:41:03.751434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.751461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.764264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e0630 00:33:06.437 [2024-07-12 13:41:03.765889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.765915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.775386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f9f68 00:33:06.437 [2024-07-12 13:41:03.776565] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.776593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.787244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f8618 00:33:06.437 [2024-07-12 13:41:03.788343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.437 [2024-07-12 13:41:03.788370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.437 [2024-07-12 13:41:03.800798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e6b70 00:33:06.437 [2024-07-12 13:41:03.802636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.802677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.809137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ff3c8 00:33:06.438 [2024-07-12 13:41:03.809970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.809996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.821298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fc128 00:33:06.438 [2024-07-12 13:41:03.822294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.822341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.833651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190df550 00:33:06.438 [2024-07-12 13:41:03.834757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.834784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.845856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e4578 00:33:06.438 [2024-07-12 13:41:03.847054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.847096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.857814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190eea00 00:33:06.438 [2024-07-12 13:41:03.859001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.859042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.869562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e84c0 00:33:06.438 [2024-07-12 13:41:03.870803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.870844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.881740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e49b0 00:33:06.438 [2024-07-12 13:41:03.882816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.882843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.893893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e0ea0 00:33:06.438 [2024-07-12 13:41:03.895337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.895364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.438 [2024-07-12 13:41:03.904700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e8d30 00:33:06.438 [2024-07-12 13:41:03.906712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.438 [2024-07-12 13:41:03.906740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:03.915127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e12d8 00:33:06.694 [2024-07-12 13:41:03.915968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:03.915993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:03.927460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ebb98 00:33:06.694 [2024-07-12 13:41:03.928496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:03.928522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:03.940544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fb8b8 00:33:06.694 [2024-07-12 
13:41:03.941798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:03.941824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:03.952622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190df550 00:33:06.694 [2024-07-12 13:41:03.953892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:03.953922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:03.964925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190df118 00:33:06.694 [2024-07-12 13:41:03.966333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:03.966375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:03.974703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e8088 00:33:06.694 [2024-07-12 13:41:03.975486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:03.975514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:03.988162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ebb98 00:33:06.694 [2024-07-12 13:41:03.989781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:03.989806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:03.999163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ee5c8 00:33:06.694 [2024-07-12 13:41:04.000310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.000358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.010850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f0788 00:33:06.694 [2024-07-12 13:41:04.012099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.012141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.021768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with 
pdu=0x2000190de470 00:33:06.694 [2024-07-12 13:41:04.022905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.022930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.035098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fac10 00:33:06.694 [2024-07-12 13:41:04.036482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.036510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.047051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f7970 00:33:06.694 [2024-07-12 13:41:04.048379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.048420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.058909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f8618 00:33:06.694 [2024-07-12 13:41:04.060269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.060322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.070991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f6cc8 00:33:06.694 [2024-07-12 13:41:04.072325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.072367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.082995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f5be8 00:33:06.694 [2024-07-12 13:41:04.084375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.084417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.094987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f57b0 00:33:06.694 [2024-07-12 13:41:04.096312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.694 [2024-07-12 13:41:04.096357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.694 [2024-07-12 13:41:04.106932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f679f0) with pdu=0x2000190f3e60 00:33:06.694 [2024-07-12 13:41:04.108281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.695 [2024-07-12 13:41:04.108328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.695 [2024-07-12 13:41:04.118780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190eaab8 00:33:06.695 [2024-07-12 13:41:04.120144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.695 [2024-07-12 13:41:04.120186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.695 [2024-07-12 13:41:04.130700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f96f8 00:33:06.695 [2024-07-12 13:41:04.132080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.695 [2024-07-12 13:41:04.132121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.695 [2024-07-12 13:41:04.141658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190eb328 00:33:06.695 [2024-07-12 13:41:04.143455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.695 [2024-07-12 13:41:04.143483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:06.695 [2024-07-12 13:41:04.152677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f7100 00:33:06.695 [2024-07-12 13:41:04.153592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.695 [2024-07-12 13:41:04.153619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:06.695 [2024-07-12 13:41:04.164926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e88f8 00:33:06.695 [2024-07-12 13:41:04.166010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.695 [2024-07-12 13:41:04.166035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.177389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f31b8 00:33:06.952 [2024-07-12 13:41:04.178561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.178588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.189661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f679f0) with pdu=0x2000190eea00 00:33:06.952 [2024-07-12 13:41:04.190923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.190949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.200810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e9168 00:33:06.952 [2024-07-12 13:41:04.202044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.202070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.213154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fc128 00:33:06.952 [2024-07-12 13:41:04.214553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.214581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.224258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f3e60 00:33:06.952 [2024-07-12 13:41:04.225258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.225298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.236154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ee190 00:33:06.952 [2024-07-12 13:41:04.237062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.237090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.248383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e5220 00:33:06.952 [2024-07-12 13:41:04.249448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.249475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.262053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e7c50 00:33:06.952 [2024-07-12 13:41:04.263879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.263905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.270448] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fa7d8 00:33:06.952 [2024-07-12 13:41:04.271309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.271357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.282838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e8d30 00:33:06.952 [2024-07-12 13:41:04.283766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.283792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.293789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ef6a8 00:33:06.952 [2024-07-12 13:41:04.294779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.294805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.306059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ee190 00:33:06.952 [2024-07-12 13:41:04.307158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.307183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.318399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190edd58 00:33:06.952 [2024-07-12 13:41:04.319639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.319680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.329364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f2d80 00:33:06.952 [2024-07-12 13:41:04.330213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.330255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.341162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e3498 00:33:06.952 [2024-07-12 13:41:04.341996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.342037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 
13:41:04.352091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f5be8 00:33:06.952 [2024-07-12 13:41:04.352926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.352952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.364421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fda78 00:33:06.952 [2024-07-12 13:41:04.365409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.365441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.376775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190dfdc0 00:33:06.952 [2024-07-12 13:41:04.377865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.377890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.389867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f9f68 00:33:06.952 [2024-07-12 13:41:04.391201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.391227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.402036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e2c28 00:33:06.952 [2024-07-12 13:41:04.403472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.403498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:06.952 [2024-07-12 13:41:04.412042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e5a90 00:33:06.952 [2024-07-12 13:41:04.412853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.952 [2024-07-12 13:41:04.412880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.424359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ecc78 00:33:07.210 [2024-07-12 13:41:04.425528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.425556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:33:07.210 [2024-07-12 13:41:04.435906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e9e10 00:33:07.210 [2024-07-12 13:41:04.437125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.437152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.448177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e4578 00:33:07.210 [2024-07-12 13:41:04.449546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.449573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.460577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f6020 00:33:07.210 [2024-07-12 13:41:04.462062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.462088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.472896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f7da8 00:33:07.210 [2024-07-12 13:41:04.474545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.474572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.485061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e12d8 00:33:07.210 [2024-07-12 13:41:04.486855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.486881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.493365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e3d08 00:33:07.210 [2024-07-12 13:41:04.494141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.494166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.504459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ddc00 00:33:07.210 [2024-07-12 13:41:04.505195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.505220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0003 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.516824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f0bc0 00:33:07.210 [2024-07-12 13:41:04.517726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.210 [2024-07-12 13:41:04.517751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:07.210 [2024-07-12 13:41:04.529077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ed4e8 00:33:07.210 [2024-07-12 13:41:04.530193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.530219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.542262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fd640 00:33:07.211 [2024-07-12 13:41:04.543592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.543634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.554437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e73e0 00:33:07.211 [2024-07-12 13:41:04.555795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.555822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.564121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190eb328 00:33:07.211 [2024-07-12 13:41:04.564915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.564958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.575886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e4140 00:33:07.211 [2024-07-12 13:41:04.576642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.576685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.587669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f9f68 00:33:07.211 [2024-07-12 13:41:04.588475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.588502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.599644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ec840 00:33:07.211 [2024-07-12 13:41:04.600390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.600433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.612939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e95a0 00:33:07.211 [2024-07-12 13:41:04.614280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.614330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.625286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ef270 00:33:07.211 [2024-07-12 13:41:04.626877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.626905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.636432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f0ff8 00:33:07.211 [2024-07-12 13:41:04.637557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.637594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.648344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fcdd0 00:33:07.211 [2024-07-12 13:41:04.649380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.649408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.662258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fd640 00:33:07.211 [2024-07-12 13:41:04.664292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.664336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:07.211 [2024-07-12 13:41:04.670824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f2948 00:33:07.211 [2024-07-12 13:41:04.671610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.211 [2024-07-12 13:41:04.671662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.682964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f3a28 00:33:07.469 [2024-07-12 13:41:04.683784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.683812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.697332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f20d8 00:33:07.469 [2024-07-12 13:41:04.698862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.698890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.708359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f31b8 00:33:07.469 [2024-07-12 13:41:04.709806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.709832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.720421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fe720 00:33:07.469 [2024-07-12 13:41:04.722068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.722094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.731229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f81e0 00:33:07.469 [2024-07-12 13:41:04.732496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.732538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.741929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e7c50 00:33:07.469 [2024-07-12 13:41:04.743614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.743641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.752720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190feb58 00:33:07.469 [2024-07-12 13:41:04.753545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.753572] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.764801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190e99d8 00:33:07.469 [2024-07-12 13:41:04.765751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.765777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.469 [2024-07-12 13:41:04.775922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fef90 00:33:07.469 [2024-07-12 13:41:04.776830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.469 [2024-07-12 13:41:04.776856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:07.470 [2024-07-12 13:41:04.788146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f9b30 00:33:07.470 [2024-07-12 13:41:04.789181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.470 [2024-07-12 13:41:04.789206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:07.470 [2024-07-12 13:41:04.801196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190df550 00:33:07.470 [2024-07-12 13:41:04.802506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.470 [2024-07-12 13:41:04.802534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:07.470 [2024-07-12 13:41:04.813471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190fc998 00:33:07.470 [2024-07-12 13:41:04.814833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.470 [2024-07-12 13:41:04.814859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:07.470 [2024-07-12 13:41:04.825829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f20d8 00:33:07.470 [2024-07-12 13:41:04.827335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.470 [2024-07-12 13:41:04.827376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.470 [2024-07-12 13:41:04.835636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190ef6a8 00:33:07.470 [2024-07-12 13:41:04.836487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.470 [2024-07-12 13:41:04.836530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:07.470 [2024-07-12 13:41:04.847845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f5378 00:33:07.470 [2024-07-12 13:41:04.848850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.470 [2024-07-12 13:41:04.848878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:07.470 [2024-07-12 13:41:04.861509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f679f0) with pdu=0x2000190f7da8 00:33:07.470 [2024-07-12 13:41:04.863377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.470 [2024-07-12 13:41:04.863419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:07.470 00:33:07.470 Latency(us) 00:33:07.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.470 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:07.470 nvme0n1 : 2.00 21577.79 84.29 0.00 0.00 5925.11 2524.35 14854.83 00:33:07.470 =================================================================================================================== 00:33:07.470 Total : 21577.79 84.29 0.00 0.00 5925.11 2524.35 14854.83 00:33:07.470 0 00:33:07.470 13:41:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:07.470 13:41:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:07.470 13:41:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:07.470 13:41:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:07.470 | .driver_specific 00:33:07.470 | .nvme_error 00:33:07.470 | .status_code 00:33:07.470 | .command_transient_transport_error' 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 )) 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3724955 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3724955 ']' 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3724955 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3724955 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3724955' 00:33:07.728 killing process 
with pid 3724955 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3724955 00:33:07.728 Received shutdown signal, test time was about 2.000000 seconds 00:33:07.728 00:33:07.728 Latency(us) 00:33:07.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.728 =================================================================================================================== 00:33:07.728 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.728 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3724955 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3725359 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3725359 /var/tmp/bperf.sock 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3725359 ']' 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:07.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:07.986 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:07.986 [2024-07-12 13:41:05.432768] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:07.986 [2024-07-12 13:41:05.432846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725359 ] 00:33:07.986 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:07.986 Zero copy mechanism will not be used. 00:33:08.244 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.244 [2024-07-12 13:41:05.464026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
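The (( 169 > 0 )) check above is the pass condition for the run that just finished: host/digest.sh asks the bdevperf app, over /var/tmp/bperf.sock, how many commands completed with the transient transport error status and requires that counter to be non-zero. A minimal sketch of that readback, reconstructed from the xtrace (the jq path is the one shown in the log; the socket and bdev name are the ones used by this run):

    # Read the per-bdev transient-transport-error counter; it is only populated
    # because the test sets bdev_nvme_set_options --nvme-error-stat beforehand.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    count=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 ))   # non-zero means the injected digest errors were observed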
00:33:08.244 [2024-07-12 13:41:05.491356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.244 [2024-07-12 13:41:05.576430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.244 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:08.244 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:08.244 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:08.244 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:08.502 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:08.502 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.502 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.502 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.502 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.502 13:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.068 nvme0n1 00:33:09.068 13:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:09.068 13:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.068 13:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:09.068 13:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.068 13:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:09.068 13:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:09.068 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:09.068 Zero copy mechanism will not be used. 00:33:09.068 Running I/O for 2 seconds... 
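The xtrace above sets up the 131072-byte, queue-depth-16 error-injection run end to end. Condensed into plain commands it is roughly the following sketch, reconstructed from the log; the test itself issues the accel_error_inject_error calls through its rpc_cmd helper, so routing them to the target's default RPC socket here is an assumption, and all paths are the workspace paths from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF="$RPC -s /var/tmp/bperf.sock"     # RPC socket of the bdevperf app

    # Keep per-command NVMe error statistics and retry failed I/O indefinitely.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start from a clean state on the crc32c accel operation.
    $RPC accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd CRC-32C calculation so data digest verification fails
    # and WRITEs complete with a transient transport error.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the 2-second random-write workload configured on the bdevperf command line.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests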
00:33:09.327 [2024-07-12 13:41:06.550701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.551061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.551112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.562826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.563178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.563220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.575462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.575848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.575876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.587554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.587935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.587963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.600184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.600547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.600575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.613077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.613352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.613380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.624821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.625216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.625244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.634861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.635415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.635442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.645946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.646449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.646476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.657664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.658201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.658229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.669339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.669702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.669730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.681472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.681885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.681912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.693184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.693649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.693677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.704165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.704628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.704655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.716095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.716510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.716538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.327 [2024-07-12 13:41:06.727882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.327 [2024-07-12 13:41:06.728412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.327 [2024-07-12 13:41:06.728454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.328 [2024-07-12 13:41:06.739811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.328 [2024-07-12 13:41:06.740304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.328 [2024-07-12 13:41:06.740358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.328 [2024-07-12 13:41:06.751733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.328 [2024-07-12 13:41:06.752215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.328 [2024-07-12 13:41:06.752241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.328 [2024-07-12 13:41:06.764354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.328 [2024-07-12 13:41:06.764734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.328 [2024-07-12 13:41:06.764766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.328 [2024-07-12 13:41:06.775771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.328 [2024-07-12 13:41:06.776205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.328 [2024-07-12 13:41:06.776233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.328 [2024-07-12 13:41:06.787520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.328 [2024-07-12 13:41:06.787958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.328 [2024-07-12 13:41:06.788001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.328 [2024-07-12 13:41:06.799016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.586 [2024-07-12 13:41:06.799392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.586 [2024-07-12 13:41:06.799421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.586 [2024-07-12 13:41:06.809039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.586 [2024-07-12 13:41:06.809465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.586 [2024-07-12 13:41:06.809493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.586 [2024-07-12 13:41:06.820883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.586 [2024-07-12 13:41:06.821274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.586 [2024-07-12 13:41:06.821324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.586 [2024-07-12 13:41:06.833441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.586 [2024-07-12 13:41:06.833831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.586 [2024-07-12 13:41:06.833859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.586 [2024-07-12 13:41:06.846230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.586 [2024-07-12 13:41:06.846640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.586 [2024-07-12 13:41:06.846667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.586 [2024-07-12 13:41:06.858102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.586 [2024-07-12 13:41:06.858516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.586 [2024-07-12 13:41:06.858544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.586 [2024-07-12 13:41:06.870129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.586 [2024-07-12 13:41:06.870588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 
[2024-07-12 13:41:06.870617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.881941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.882429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.882470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.894598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.894937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.894978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.905605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.906126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.906153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.917212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.917650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.917677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.929064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.929476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.929504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.941822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.942242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.942268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.954094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.954575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.954602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.966269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.966750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.966778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.978362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.978711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.978738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:06.990882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:06.991233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:06.991260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:07.003280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:07.003810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:07.003851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:07.015567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:07.015977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:07.016004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:07.027730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:07.028139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:07.028166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:07.039394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:07.039892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:07.039919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.587 [2024-07-12 13:41:07.051982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.587 [2024-07-12 13:41:07.052446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.587 [2024-07-12 13:41:07.052488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.064074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.064536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.064564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.075717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.076089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.076121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.087461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.087938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.087965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.099462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.099940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.099982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.111029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.111394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.111422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.121942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.122419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.122446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.133383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.133765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.133792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.144651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.145029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.145056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.157081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.157834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.157861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.171564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.172061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.172090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.183124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.183708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.183741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.194368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 [2024-07-12 13:41:07.194838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.846 [2024-07-12 13:41:07.194871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.846 [2024-07-12 13:41:07.204456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90 00:33:09.846 
[2024-07-12 13:41:07.204898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:09.846 [2024-07-12 13:41:07.204930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:09.846 [2024-07-12 13:41:07.214795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90
00:33:09.846 [2024-07-12 13:41:07.215273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:09.846 [2024-07-12 13:41:07.215306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2067:data_crc32_calc_done data digest error on tqpair=(0x1f67d30), nvme_qpair.c WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further WRITE commands at roughly 10 ms intervals, from 13:41:07.224293 through 13:41:08.526323 ...]
00:33:11.142 [2024-07-12 13:41:08.538869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f67d30) with pdu=0x2000190fef90
00:33:11.142 [2024-07-12 13:41:08.539151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:11.142 [2024-07-12 13:41:08.539178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:11.142
00:33:11.142 Latency(us)
00:33:11.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:11.142 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:11.142 nvme0n1 : 2.01 2668.95 333.62 0.00 0.00 5980.38 2293.76 15146.10
00:33:11.142
=================================================================================================================== 00:33:11.142 Total : 2668.95 333.62 0.00 0.00 5980.38 2293.76 15146.10 00:33:11.142 0 00:33:11.142 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:11.142 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:11.142 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:11.142 | .driver_specific 00:33:11.142 | .nvme_error 00:33:11.142 | .status_code 00:33:11.142 | .command_transient_transport_error' 00:33:11.142 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 172 > 0 )) 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3725359 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3725359 ']' 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3725359 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3725359 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3725359' 00:33:11.400 killing process with pid 3725359 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3725359 00:33:11.400 Received shutdown signal, test time was about 2.000000 seconds 00:33:11.400 00:33:11.400 Latency(us) 00:33:11.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.400 =================================================================================================================== 00:33:11.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.400 13:41:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3725359 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3723999 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3723999 ']' 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3723999 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723999 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723999' 00:33:11.658 killing process with pid 3723999 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3723999 00:33:11.658 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3723999 00:33:11.917 00:33:11.917 real 0m15.085s 00:33:11.917 user 0m30.269s 00:33:11.917 sys 0m3.906s 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:11.917 ************************************ 00:33:11.917 END TEST nvmf_digest_error 00:33:11.917 ************************************ 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:11.917 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:11.917 rmmod nvme_tcp 00:33:11.917 rmmod nvme_fabrics 00:33:11.917 rmmod nvme_keyring 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3723999 ']' 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3723999 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3723999 ']' 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3723999 00:33:12.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3723999) - No such process 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3723999 is not found' 00:33:12.176 Process with pid 3723999 is not found 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:12.176 13:41:09 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.081 13:41:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:14.081 00:33:14.081 real 0m34.924s 00:33:14.081 user 1m1.395s 00:33:14.081 sys 0m9.540s 00:33:14.081 13:41:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:14.081 13:41:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:14.081 ************************************ 00:33:14.081 END TEST nvmf_digest 00:33:14.081 ************************************ 00:33:14.081 13:41:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:14.081 13:41:11 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:14.081 13:41:11 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:14.081 13:41:11 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:14.081 13:41:11 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:14.081 13:41:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:14.081 13:41:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:14.082 13:41:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:14.082 ************************************ 00:33:14.082 START TEST nvmf_bdevperf 00:33:14.082 ************************************ 00:33:14.082 13:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:14.082 * Looking for test storage... 00:33:14.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:14.341 13:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:16.254 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:16.254 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:16.254 Found net devices under 0000:09:00.0: cvl_0_0 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:16.254 Found net devices under 0000:09:00.1: cvl_0_1 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.254 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.255 
13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:16.255 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.512 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.512 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:16.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:33:16.513 00:33:16.513 --- 10.0.0.2 ping statistics --- 00:33:16.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.513 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:33:16.513 00:33:16.513 --- 10.0.0.1 ping statistics --- 00:33:16.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.513 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3727830 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3727830 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3727830 ']' 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
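The nvmf_tcp_init sequence traced above condenses to the sketch below; it assumes the same cvl_0_0/cvl_0_1 ports enumerated in this run and simply restates the commands from the trace, so it is a reading aid rather than a substitute for nvmf/common.sh:

  # Target port moves into its own namespace; the initiator stays in the host namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator (NVMF_INITIATOR_IP)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target (NVMF_FIRST_TARGET_IP)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator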
00:33:16.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:16.513 13:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.513 [2024-07-12 13:41:13.836215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:16.513 [2024-07-12 13:41:13.836298] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.513 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.513 [2024-07-12 13:41:13.874264] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:16.513 [2024-07-12 13:41:13.899896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.771 [2024-07-12 13:41:13.987012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.771 [2024-07-12 13:41:13.987072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.771 [2024-07-12 13:41:13.987085] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.771 [2024-07-12 13:41:13.987097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.771 [2024-07-12 13:41:13.987106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.771 [2024-07-12 13:41:13.987163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.771 [2024-07-12 13:41:13.987220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.771 [2024-07-12 13:41:13.987223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.771 [2024-07-12 13:41:14.133191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.771 Malloc0 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.771 [2024-07-12 13:41:14.192948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:16.771 13:41:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:16.772 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:16.772 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:16.772 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:16.772 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:16.772 { 00:33:16.772 "params": { 00:33:16.772 "name": "Nvme$subsystem", 00:33:16.772 "trtype": "$TEST_TRANSPORT", 00:33:16.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.772 "adrfam": "ipv4", 00:33:16.772 "trsvcid": "$NVMF_PORT", 00:33:16.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.772 "hdgst": ${hdgst:-false}, 00:33:16.772 "ddgst": ${ddgst:-false} 00:33:16.772 }, 00:33:16.772 "method": "bdev_nvme_attach_controller" 00:33:16.772 } 00:33:16.772 EOF 00:33:16.772 )") 00:33:16.772 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:16.772 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
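Spelled out with rpc.py directly (rpc_cmd is a thin wrapper around it), the tgt_init bring-up traced above is roughly the following; /var/tmp/spdk.sock is assumed because it is the address waitforlisten polls in this run:

  # Hedged expansion of the rpc_cmd calls above into explicit rpc.py invocations.
  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420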
00:33:16.772 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:16.772 13:41:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:16.772 "params": { 00:33:16.772 "name": "Nvme1", 00:33:16.772 "trtype": "tcp", 00:33:16.772 "traddr": "10.0.0.2", 00:33:16.772 "adrfam": "ipv4", 00:33:16.772 "trsvcid": "4420", 00:33:16.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:16.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:16.772 "hdgst": false, 00:33:16.772 "ddgst": false 00:33:16.772 }, 00:33:16.772 "method": "bdev_nvme_attach_controller" 00:33:16.772 }' 00:33:16.772 [2024-07-12 13:41:14.241855] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:16.772 [2024-07-12 13:41:14.241921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727853 ] 00:33:17.029 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.029 [2024-07-12 13:41:14.273128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:17.029 [2024-07-12 13:41:14.302164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.029 [2024-07-12 13:41:14.398333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.288 Running I/O for 1 seconds... 00:33:18.221 00:33:18.221 Latency(us) 00:33:18.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.221 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:18.221 Verification LBA range: start 0x0 length 0x4000 00:33:18.221 Nvme1n1 : 1.00 8638.94 33.75 0.00 0.00 14756.70 1328.92 18252.99 00:33:18.221 =================================================================================================================== 00:33:18.221 Total : 8638.94 33.75 0.00 0.00 14756.70 1328.92 18252.99 00:33:18.478 13:41:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3727992 00:33:18.478 13:41:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.479 { 00:33:18.479 "params": { 00:33:18.479 "name": "Nvme$subsystem", 00:33:18.479 "trtype": "$TEST_TRANSPORT", 00:33:18.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.479 "adrfam": "ipv4", 00:33:18.479 "trsvcid": "$NVMF_PORT", 00:33:18.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.479 "hdgst": ${hdgst:-false}, 00:33:18.479 "ddgst": ${ddgst:-false} 00:33:18.479 }, 00:33:18.479 "method": "bdev_nvme_attach_controller" 00:33:18.479 } 00:33:18.479 EOF 00:33:18.479 )") 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:18.479 
13:41:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:18.479 13:41:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:18.479 "params": { 00:33:18.479 "name": "Nvme1", 00:33:18.479 "trtype": "tcp", 00:33:18.479 "traddr": "10.0.0.2", 00:33:18.479 "adrfam": "ipv4", 00:33:18.479 "trsvcid": "4420", 00:33:18.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.479 "hdgst": false, 00:33:18.479 "ddgst": false 00:33:18.479 }, 00:33:18.479 "method": "bdev_nvme_attach_controller" 00:33:18.479 }' 00:33:18.479 [2024-07-12 13:41:15.876187] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:18.479 [2024-07-12 13:41:15.876260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727992 ] 00:33:18.479 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.479 [2024-07-12 13:41:15.907415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:18.479 [2024-07-12 13:41:15.936141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.737 [2024-07-12 13:41:16.023485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.994 Running I/O for 15 seconds... 00:33:21.524 13:41:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3727830 00:33:21.524 13:41:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:21.524 [2024-07-12 13:41:18.842892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.842936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.842985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843138] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.524 [2024-07-12 13:41:18.843290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.524 [2024-07-12 13:41:18.843312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843480] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44808 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.843981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.843995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 
[2024-07-12 13:41:18.844083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.525 [2024-07-12 13:41:18.844539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.525 [2024-07-12 13:41:18.844554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.844987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.844999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 
13:41:18.845191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.526 [2024-07-12 13:41:18.845202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.526 [2024-07-12 13:41:18.845227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.526 [2024-07-12 13:41:18.845252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.526 [2024-07-12 13:41:18.845277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.526 [2024-07-12 13:41:18.845329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.526 [2024-07-12 13:41:18.845719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.526 [2024-07-12 13:41:18.845732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.845975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.845988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.846000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.846024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45416 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.846049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.846078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.846103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.846128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.846155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 
13:41:18.846338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.527 [2024-07-12 13:41:18.846627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.527 [2024-07-12 13:41:18.846654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8cd60 is same with the state(5) to be set 00:33:21.527 [2024-07-12 13:41:18.846697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:21.527 [2024-07-12 13:41:18.846708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:21.527 [2024-07-12 13:41:18.846718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44600 len:8 PRP1 0x0 PRP2 0x0 00:33:21.527 [2024-07-12 13:41:18.846730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.527 [2024-07-12 13:41:18.846783] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb8cd60 was disconnected and freed. reset controller. 00:33:21.527 [2024-07-12 13:41:18.849888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.527 [2024-07-12 13:41:18.849964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.527 [2024-07-12 13:41:18.850815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.527 [2024-07-12 13:41:18.850844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.527 [2024-07-12 13:41:18.850860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.527 [2024-07-12 13:41:18.851162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.527 [2024-07-12 13:41:18.851457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.527 [2024-07-12 13:41:18.851479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.527 [2024-07-12 13:41:18.851495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.527 [2024-07-12 13:41:18.855588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
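The completions dumped above are all flagged ABORTED - SQ DELETION (00/08): in the (sct/sc) notation used by spdk_nvme_print_completion, the first field is the NVMe status code type (0x0, generic command status) and the second is the status code (0x08, Command Aborted due to SQ Deletion), which is what outstanding WRITE/READ commands are expected to report when their submission queue is torn down during the controller reset that follows. A minimal, standalone C sketch of decoding such a pair (the constant names below are local to this example, not taken from SPDK headers):

```c
/* decode_status.c - illustrative decoder for an NVMe (sct/sc) status pair as
 * printed in the log, e.g. "ABORTED - SQ DELETION (00/08)".
 * Standalone sketch; the constants are defined locally for this example. */
#include <stdio.h>
#include <stdint.h>

#define SCT_GENERIC            0x00 /* status code type: generic command status */
#define SC_ABORTED_SQ_DELETION 0x08 /* status code: command aborted due to SQ deletion */

static const char *describe(uint8_t sct, uint8_t sc)
{
    if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION) {
        return "ABORTED - SQ DELETION";
    }
    return "other/unknown status";
}

int main(void)
{
    uint8_t sct = 0x00, sc = 0x08; /* the (00/08) pair seen throughout the dump */
    printf("(%02x/%02x) -> %s\n", sct, sc, describe(sct, sc));
    return 0;
}
```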
00:33:21.527 [2024-07-12 13:41:18.864674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.527 [2024-07-12 13:41:18.865130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.527 [2024-07-12 13:41:18.865156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.527 [2024-07-12 13:41:18.865171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.527 [2024-07-12 13:41:18.865471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.865762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.865781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.865793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.869670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.528 [2024-07-12 13:41:18.878883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.879401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.879442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.879458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.879752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.879993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.880011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.880024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.883762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
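Every reconnect attempt from here on fails inside posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: nothing is accepting connections at 10.0.0.2 port 4420 at this point in the run, so each controller reset ends with "Resetting controller failed." A small POSIX sketch (address and port copied from the log purely for illustration) showing how a refused connection surfaces as that errno:

```c
/* connect_refused.c - illustrative POSIX client showing how a refused
 * NVMe/TCP connection surfaces as errno 111 (ECONNREFUSED) on Linux.
 * The address and port below mirror the log and are assumptions of this sketch. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener on the port, this typically
         * prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```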
00:33:21.528 [2024-07-12 13:41:18.892902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.893331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.893372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.893387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.893698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.893939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.893957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.893970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.897601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.528 [2024-07-12 13:41:18.906759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.907225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.907267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.907284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.907575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.907851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.907870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.907882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.911700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.528 [2024-07-12 13:41:18.920761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.921277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.921328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.921355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.921664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.921905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.921923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.921935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.925713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.528 [2024-07-12 13:41:18.934775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.935286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.935313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.935339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.935626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.935884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.935903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.935915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.939729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.528 [2024-07-12 13:41:18.948868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.949352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.949377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.949412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.949690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.949953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.949972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.949984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.953740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.528 [2024-07-12 13:41:18.962915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.963436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.963477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.963493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.963803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.964044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.964062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.964074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.967836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.528 [2024-07-12 13:41:18.977049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.977787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.977825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.977856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.978117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.978407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.978429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.978442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.528 [2024-07-12 13:41:18.982184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.528 [2024-07-12 13:41:18.991411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.528 [2024-07-12 13:41:18.991980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.528 [2024-07-12 13:41:18.992008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.528 [2024-07-12 13:41:18.992039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.528 [2024-07-12 13:41:18.992346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.528 [2024-07-12 13:41:18.992621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.528 [2024-07-12 13:41:18.992647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.528 [2024-07-12 13:41:18.992677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.787 [2024-07-12 13:41:18.996842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.787 [2024-07-12 13:41:19.005789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.787 [2024-07-12 13:41:19.006313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.006362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.006377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.006698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.006939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.006958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.006970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.010837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.788 [2024-07-12 13:41:19.019785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.020305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.020353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.020369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.020665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.020906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.020924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.020937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.024686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.788 [2024-07-12 13:41:19.033824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.034416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.034459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.034476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.034774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.035016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.035034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.035046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.038822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.788 [2024-07-12 13:41:19.047680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.048199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.048241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.048258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.048569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.048844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.048862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.048874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.052676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.788 [2024-07-12 13:41:19.061622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.062144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.062172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.062203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.062515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.062793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.062812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.062824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.066592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.788 [2024-07-12 13:41:19.075672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.076142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.076184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.076200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.076518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.076781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.076800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.076812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.080496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
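The same cycle keeps repeating: disconnect the controller, try to reconnect the TCP qpair, hit the connection-refused error, mark the controller as failed, wait a few milliseconds, try again. The sketch below shows that generic bounded retry-with-delay pattern in isolation; attempt_reconnect() is a placeholder invented for this example, and the loop is not SPDK's actual bdev_nvme reset logic:

```c
/* reconnect_loop.c - a generic bounded retry loop of the kind visible in the
 * log (reset -> connect fails -> retry after a short delay). Pattern sketch
 * only; attempt_reconnect() is a placeholder, not an SPDK API. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool attempt_reconnect(void)
{
    /* Placeholder: a real implementation would re-create the transport
     * connection and re-initialize the controller here. */
    return false; /* mimic the log, where every attempt fails */
}

int main(void)
{
    const int max_attempts = 5;
    const useconds_t delay_us = 14 * 1000; /* ~14 ms, roughly the spacing between attempts in the log */

    for (int i = 1; i <= max_attempts; i++) {
        if (attempt_reconnect()) {
            printf("attempt %d: controller reconnected\n", i);
            return 0;
        }
        printf("attempt %d: reconnect failed, retrying\n", i);
        usleep(delay_us);
    }
    printf("giving up after %d attempts\n", max_attempts);
    return 1;
}
```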
00:33:21.788 [2024-07-12 13:41:19.089595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.090107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.090149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.090165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.090484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.090770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.090788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.090800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.094546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.788 [2024-07-12 13:41:19.104454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.104973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.105011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.105029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.105329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.105638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.105659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.105672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.109892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.788 [2024-07-12 13:41:19.118726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.119243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.119285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.119301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.119606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.119866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.119885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.119897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.123768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.788 [2024-07-12 13:41:19.132934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.133387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.133415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.133430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.133731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.133972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.133990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.134007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.788 [2024-07-12 13:41:19.137799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.788 [2024-07-12 13:41:19.146777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.788 [2024-07-12 13:41:19.147244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.788 [2024-07-12 13:41:19.147287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.788 [2024-07-12 13:41:19.147303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.788 [2024-07-12 13:41:19.147583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.788 [2024-07-12 13:41:19.147841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.788 [2024-07-12 13:41:19.147860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.788 [2024-07-12 13:41:19.147871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.789 [2024-07-12 13:41:19.151691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.789 [2024-07-12 13:41:19.160912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.789 [2024-07-12 13:41:19.161363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.789 [2024-07-12 13:41:19.161405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.789 [2024-07-12 13:41:19.161421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.789 [2024-07-12 13:41:19.161724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.789 [2024-07-12 13:41:19.161964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.789 [2024-07-12 13:41:19.161983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.789 [2024-07-12 13:41:19.161995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.789 [2024-07-12 13:41:19.165668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.789 [2024-07-12 13:41:19.174785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.789 [2024-07-12 13:41:19.175298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.789 [2024-07-12 13:41:19.175332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.789 [2024-07-12 13:41:19.175366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.789 [2024-07-12 13:41:19.175665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.789 [2024-07-12 13:41:19.175906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.789 [2024-07-12 13:41:19.175924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.789 [2024-07-12 13:41:19.175936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.789 [2024-07-12 13:41:19.179563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.789 [2024-07-12 13:41:19.188679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.789 [2024-07-12 13:41:19.189099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.789 [2024-07-12 13:41:19.189144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.789 [2024-07-12 13:41:19.189159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.789 [2024-07-12 13:41:19.189456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.789 [2024-07-12 13:41:19.189739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.789 [2024-07-12 13:41:19.189758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.789 [2024-07-12 13:41:19.189770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.789 [2024-07-12 13:41:19.193515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.789 [2024-07-12 13:41:19.202689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.789 [2024-07-12 13:41:19.203113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.789 [2024-07-12 13:41:19.203153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.789 [2024-07-12 13:41:19.203168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.789 [2024-07-12 13:41:19.203457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.789 [2024-07-12 13:41:19.203721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.789 [2024-07-12 13:41:19.203739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.789 [2024-07-12 13:41:19.203751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.789 [2024-07-12 13:41:19.207355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.789 [2024-07-12 13:41:19.216697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.789 [2024-07-12 13:41:19.217373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.789 [2024-07-12 13:41:19.217412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.789 [2024-07-12 13:41:19.217444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.789 [2024-07-12 13:41:19.217706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.789 [2024-07-12 13:41:19.217948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.789 [2024-07-12 13:41:19.217966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.789 [2024-07-12 13:41:19.217978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.789 [2024-07-12 13:41:19.221578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.789 [2024-07-12 13:41:19.230551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.789 [2024-07-12 13:41:19.231005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.789 [2024-07-12 13:41:19.231047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.789 [2024-07-12 13:41:19.231063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.789 [2024-07-12 13:41:19.231387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.789 [2024-07-12 13:41:19.231643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.789 [2024-07-12 13:41:19.231677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.789 [2024-07-12 13:41:19.231690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.789 [2024-07-12 13:41:19.235451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.789 [2024-07-12 13:41:19.244512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.789 [2024-07-12 13:41:19.244987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.789 [2024-07-12 13:41:19.245030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:21.789 [2024-07-12 13:41:19.245046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:21.789 [2024-07-12 13:41:19.245369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:21.789 [2024-07-12 13:41:19.245618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.789 [2024-07-12 13:41:19.245651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.789 [2024-07-12 13:41:19.245663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.789 [2024-07-12 13:41:19.249421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.789 [2024-07-12 13:41:19.259056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.048 [2024-07-12 13:41:19.259530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.048 [2024-07-12 13:41:19.259558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.048 [2024-07-12 13:41:19.259574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.048 [2024-07-12 13:41:19.259867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.048 [2024-07-12 13:41:19.260115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.048 [2024-07-12 13:41:19.260134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.048 [2024-07-12 13:41:19.260146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.048 [2024-07-12 13:41:19.264283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.048 [2024-07-12 13:41:19.272932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.048 [2024-07-12 13:41:19.273590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.048 [2024-07-12 13:41:19.273629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.048 [2024-07-12 13:41:19.273662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.048 [2024-07-12 13:41:19.273961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.048 [2024-07-12 13:41:19.274225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.048 [2024-07-12 13:41:19.274244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.048 [2024-07-12 13:41:19.274256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.048 [2024-07-12 13:41:19.278079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.048 [2024-07-12 13:41:19.286807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.048 [2024-07-12 13:41:19.287266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.048 [2024-07-12 13:41:19.287294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.048 [2024-07-12 13:41:19.287310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.048 [2024-07-12 13:41:19.287645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.048 [2024-07-12 13:41:19.287887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.048 [2024-07-12 13:41:19.287905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.048 [2024-07-12 13:41:19.287917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.291695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.049 [2024-07-12 13:41:19.300801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.301374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.301402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.301417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.301680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.301938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.301956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.301968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.305721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.049 [2024-07-12 13:41:19.314866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.315385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.315427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.315443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.315745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.315986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.316004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.316017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.319758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.049 [2024-07-12 13:41:19.328749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.329157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.329184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.329204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.329504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.329768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.329787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.329799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.333542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.049 [2024-07-12 13:41:19.342733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.343205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.343247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.343263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.343578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.343838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.343857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.343869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.347654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.049 [2024-07-12 13:41:19.357116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.357571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.357615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.357631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.357924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.358166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.358185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.358197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.362110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.049 [2024-07-12 13:41:19.371273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.371937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.371994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.372028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.372342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.372610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.372656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.372669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.376435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.049 [2024-07-12 13:41:19.385598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.386041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.386083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.386099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.386401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.386665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.386684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.386696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.390467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.049 [2024-07-12 13:41:19.399799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.400325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.400370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.400386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.400697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.400939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.400957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.400969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.404751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.049 [2024-07-12 13:41:19.413870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.414458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.414486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.414501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.414798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.415039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.415058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.415070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.418878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.049 [2024-07-12 13:41:19.427935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.428406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.428434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.428466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.428753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.428994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.429013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.049 [2024-07-12 13:41:19.429026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.049 [2024-07-12 13:41:19.432734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.049 [2024-07-12 13:41:19.442064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.049 [2024-07-12 13:41:19.442491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.049 [2024-07-12 13:41:19.442533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.049 [2024-07-12 13:41:19.442547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.049 [2024-07-12 13:41:19.442859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.049 [2024-07-12 13:41:19.443100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.049 [2024-07-12 13:41:19.443118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.050 [2024-07-12 13:41:19.443130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.050 [2024-07-12 13:41:19.446918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.050 [2024-07-12 13:41:19.456243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.050 [2024-07-12 13:41:19.456694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.050 [2024-07-12 13:41:19.456721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.050 [2024-07-12 13:41:19.456738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.050 [2024-07-12 13:41:19.457033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.050 [2024-07-12 13:41:19.457275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.050 [2024-07-12 13:41:19.457294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.050 [2024-07-12 13:41:19.457306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.050 [2024-07-12 13:41:19.461089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.050 [2024-07-12 13:41:19.470250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.050 [2024-07-12 13:41:19.470745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.050 [2024-07-12 13:41:19.470771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.050 [2024-07-12 13:41:19.470786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.050 [2024-07-12 13:41:19.471091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.050 [2024-07-12 13:41:19.471375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.050 [2024-07-12 13:41:19.471395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.050 [2024-07-12 13:41:19.471408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.050 [2024-07-12 13:41:19.475196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.050 [2024-07-12 13:41:19.484177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.050 [2024-07-12 13:41:19.484783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.050 [2024-07-12 13:41:19.484837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.050 [2024-07-12 13:41:19.484856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.050 [2024-07-12 13:41:19.485157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.050 [2024-07-12 13:41:19.485449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.050 [2024-07-12 13:41:19.485470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.050 [2024-07-12 13:41:19.485483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.050 [2024-07-12 13:41:19.489266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.050 [2024-07-12 13:41:19.498241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.050 [2024-07-12 13:41:19.498685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.050 [2024-07-12 13:41:19.498713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.050 [2024-07-12 13:41:19.498729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.050 [2024-07-12 13:41:19.499010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.050 [2024-07-12 13:41:19.499251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.050 [2024-07-12 13:41:19.499270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.050 [2024-07-12 13:41:19.499282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.050 [2024-07-12 13:41:19.503070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.050 [2024-07-12 13:41:19.512378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.050 [2024-07-12 13:41:19.512900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.050 [2024-07-12 13:41:19.512927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.050 [2024-07-12 13:41:19.512958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.050 [2024-07-12 13:41:19.513253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.050 [2024-07-12 13:41:19.513525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.050 [2024-07-12 13:41:19.513545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.050 [2024-07-12 13:41:19.513563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.050 [2024-07-12 13:41:19.517670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.309 [2024-07-12 13:41:19.526766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.309 [2024-07-12 13:41:19.527325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.309 [2024-07-12 13:41:19.527367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.309 [2024-07-12 13:41:19.527383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.309 [2024-07-12 13:41:19.527682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.309 [2024-07-12 13:41:19.527923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.309 [2024-07-12 13:41:19.527942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.309 [2024-07-12 13:41:19.527954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.309 [2024-07-12 13:41:19.531726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.309 [2024-07-12 13:41:19.540839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.309 [2024-07-12 13:41:19.541353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.309 [2024-07-12 13:41:19.541381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.309 [2024-07-12 13:41:19.541397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.309 [2024-07-12 13:41:19.541727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.309 [2024-07-12 13:41:19.541969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.309 [2024-07-12 13:41:19.541987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.309 [2024-07-12 13:41:19.542000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.309 [2024-07-12 13:41:19.545727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.309 [2024-07-12 13:41:19.555102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.309 [2024-07-12 13:41:19.555583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.309 [2024-07-12 13:41:19.555626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.309 [2024-07-12 13:41:19.555641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.555953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.556215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.556234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.556247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.560056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.310 [2024-07-12 13:41:19.569013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.569516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.310 [2024-07-12 13:41:19.569549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.310 [2024-07-12 13:41:19.569581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.569880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.570121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.570140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.570152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.573791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.310 [2024-07-12 13:41:19.583083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.583561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.310 [2024-07-12 13:41:19.583604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.310 [2024-07-12 13:41:19.583620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.583919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.584160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.584178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.584190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.588047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.310 [2024-07-12 13:41:19.597061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.597674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.310 [2024-07-12 13:41:19.597727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.310 [2024-07-12 13:41:19.597746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.598054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.598341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.598376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.598390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.602726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.310 [2024-07-12 13:41:19.611199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.611853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.310 [2024-07-12 13:41:19.611912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.310 [2024-07-12 13:41:19.611928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.612216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.612514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.612535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.612561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.616415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.310 [2024-07-12 13:41:19.625496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.626152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.310 [2024-07-12 13:41:19.626216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.310 [2024-07-12 13:41:19.626231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.626547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.626827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.626845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.626857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.630709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.310 [2024-07-12 13:41:19.639786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.640325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.310 [2024-07-12 13:41:19.640352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.310 [2024-07-12 13:41:19.640383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.640679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.640920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.640938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.640950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.644578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.310 [2024-07-12 13:41:19.653778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.654302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.310 [2024-07-12 13:41:19.654353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.310 [2024-07-12 13:41:19.654370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.654683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.654925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.654943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.654955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.658727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.310 [2024-07-12 13:41:19.667618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.668083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.310 [2024-07-12 13:41:19.668108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.310 [2024-07-12 13:41:19.668138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.310 [2024-07-12 13:41:19.668431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.310 [2024-07-12 13:41:19.668694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.310 [2024-07-12 13:41:19.668713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.310 [2024-07-12 13:41:19.668725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.310 [2024-07-12 13:41:19.672371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.310 [2024-07-12 13:41:19.681642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.310 [2024-07-12 13:41:19.682158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.311 [2024-07-12 13:41:19.682184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.311 [2024-07-12 13:41:19.682214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.311 [2024-07-12 13:41:19.682525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.311 [2024-07-12 13:41:19.682803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.311 [2024-07-12 13:41:19.682822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.311 [2024-07-12 13:41:19.682834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.311 [2024-07-12 13:41:19.686574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.311 [2024-07-12 13:41:19.695524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.311 [2024-07-12 13:41:19.695952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.311 [2024-07-12 13:41:19.695993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.311 [2024-07-12 13:41:19.696008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.311 [2024-07-12 13:41:19.696328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.311 [2024-07-12 13:41:19.696606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.311 [2024-07-12 13:41:19.696626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.311 [2024-07-12 13:41:19.696639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.311 [2024-07-12 13:41:19.700452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.311 [2024-07-12 13:41:19.709384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.311 [2024-07-12 13:41:19.709852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.311 [2024-07-12 13:41:19.709878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.311 [2024-07-12 13:41:19.709914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.311 [2024-07-12 13:41:19.710193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.311 [2024-07-12 13:41:19.710480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.311 [2024-07-12 13:41:19.710501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.311 [2024-07-12 13:41:19.710514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.311 [2024-07-12 13:41:19.714275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.311 [2024-07-12 13:41:19.723403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.311 [2024-07-12 13:41:19.723866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.311 [2024-07-12 13:41:19.723892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.311 [2024-07-12 13:41:19.723922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.311 [2024-07-12 13:41:19.724201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.311 [2024-07-12 13:41:19.724495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.311 [2024-07-12 13:41:19.724516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.311 [2024-07-12 13:41:19.724543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.311 [2024-07-12 13:41:19.728283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.311 [2024-07-12 13:41:19.737452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.311 [2024-07-12 13:41:19.737915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.311 [2024-07-12 13:41:19.737955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.311 [2024-07-12 13:41:19.737971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.311 [2024-07-12 13:41:19.738268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.311 [2024-07-12 13:41:19.738559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.311 [2024-07-12 13:41:19.738580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.311 [2024-07-12 13:41:19.738593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.311 [2024-07-12 13:41:19.742334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.311 [2024-07-12 13:41:19.751480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.311 [2024-07-12 13:41:19.752046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.311 [2024-07-12 13:41:19.752107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.311 [2024-07-12 13:41:19.752121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.311 [2024-07-12 13:41:19.752420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.311 [2024-07-12 13:41:19.752669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.311 [2024-07-12 13:41:19.752706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.311 [2024-07-12 13:41:19.752719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.311 [2024-07-12 13:41:19.756363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.311 [2024-07-12 13:41:19.765502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.311 [2024-07-12 13:41:19.766001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.311 [2024-07-12 13:41:19.766047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.311 [2024-07-12 13:41:19.766061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.311 [2024-07-12 13:41:19.766359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.311 [2024-07-12 13:41:19.766607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.311 [2024-07-12 13:41:19.766640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.311 [2024-07-12 13:41:19.766652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.311 [2024-07-12 13:41:19.770273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.311 [2024-07-12 13:41:19.779887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.311 [2024-07-12 13:41:19.780429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.311 [2024-07-12 13:41:19.780458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.311 [2024-07-12 13:41:19.780473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.311 [2024-07-12 13:41:19.780756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.781079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.781099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.781126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.784980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.571 [2024-07-12 13:41:19.793925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.794367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.794409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.794425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.794731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.794972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.794991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.795003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.798776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.571 [2024-07-12 13:41:19.807971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.808432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.808458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.808473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.808750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.808991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.809009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.809021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.812772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.571 [2024-07-12 13:41:19.822067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.822573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.822615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.822630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.822903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.823144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.823162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.823174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.826946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.571 [2024-07-12 13:41:19.836220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.836677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.836718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.836733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.837052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.837309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.837353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.837366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.841145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.571 [2024-07-12 13:41:19.850326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.850803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.850830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.850846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.851148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.851458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.851480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.851494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.855936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.571 [2024-07-12 13:41:19.864677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.865199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.865242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.865258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.865571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.865828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.865847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.865858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.869716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.571 [2024-07-12 13:41:19.878690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.879355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.879395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.879413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.879734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.879975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.879994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.880006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.883758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.571 [2024-07-12 13:41:19.892526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.892954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.892997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.893012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.893336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.893608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.893628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.893646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.897398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.571 [2024-07-12 13:41:19.906544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.571 [2024-07-12 13:41:19.907145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.571 [2024-07-12 13:41:19.907197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.571 [2024-07-12 13:41:19.907214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.571 [2024-07-12 13:41:19.907536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.571 [2024-07-12 13:41:19.907814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.571 [2024-07-12 13:41:19.907833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.571 [2024-07-12 13:41:19.907845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.571 [2024-07-12 13:41:19.911596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.571 [2024-07-12 13:41:19.920492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:19.921097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:19.921148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:19.921166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:19.921467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:19.921751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:19.921770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:19.921782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:19.925529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.572 [2024-07-12 13:41:19.934484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:19.934937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:19.934963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:19.934993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:19.935270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:19.935562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:19.935582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:19.935596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:19.939386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.572 [2024-07-12 13:41:19.948558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:19.948989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:19.949017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:19.949033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:19.949313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:19.949608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:19.949628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:19.949641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:19.953444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.572 [2024-07-12 13:41:19.962491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:19.963020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:19.963062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:19.963078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:19.963404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:19.963694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:19.963713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:19.963724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:19.967325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.572 [2024-07-12 13:41:19.976449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:19.976898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:19.976924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:19.976939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:19.977216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:19.977493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:19.977513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:19.977526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:19.981244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.572 [2024-07-12 13:41:19.990525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:19.990991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:19.991017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:19.991031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:19.991309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:19.991610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:19.991644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:19.991656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:19.995396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.572 [2024-07-12 13:41:20.005209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:20.005735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:20.005764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:20.005780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:20.006056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:20.006313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:20.006345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:20.006358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:20.010187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.572 [2024-07-12 13:41:20.019438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:20.020041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:20.020098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:20.020114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:20.020419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:20.020703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:20.020722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:20.020735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:20.024617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.572 [2024-07-12 13:41:20.033538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.572 [2024-07-12 13:41:20.034252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.572 [2024-07-12 13:41:20.034321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.572 [2024-07-12 13:41:20.034359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.572 [2024-07-12 13:41:20.034638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.572 [2024-07-12 13:41:20.034900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.572 [2024-07-12 13:41:20.034921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.572 [2024-07-12 13:41:20.034937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.572 [2024-07-12 13:41:20.038884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.832 [2024-07-12 13:41:20.047959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.048552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.048581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.048597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.048897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.049138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.049157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.049169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.053636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.832 [2024-07-12 13:41:20.062649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.063252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.063304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.063330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.063625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.063892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.063911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.063924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.067856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.832 [2024-07-12 13:41:20.076950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.077436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.077465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.077482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.077787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.078052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.078072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.078085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.081961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.832 [2024-07-12 13:41:20.091096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.091557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.091584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.091608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.091886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.092128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.092146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.092159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.095931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.832 [2024-07-12 13:41:20.105202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.105690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.105719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.105735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.106030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.106309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.106342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.106370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.110757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.832 [2024-07-12 13:41:20.119310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.119808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.119835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.119850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.120148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.120436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.120457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.120469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.124389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.832 [2024-07-12 13:41:20.133357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.133830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.133857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.133871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.134148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.134437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.134463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.134477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.138273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.832 [2024-07-12 13:41:20.147436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.147971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.147999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.148014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.148312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.148585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.148603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.148616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.152459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.832 [2024-07-12 13:41:20.161756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.162196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.162222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.162237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.162534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.162810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.162829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.162841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.166818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.832 [2024-07-12 13:41:20.176001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.176515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.832 [2024-07-12 13:41:20.176544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.832 [2024-07-12 13:41:20.176560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.832 [2024-07-12 13:41:20.176861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.832 [2024-07-12 13:41:20.177102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.832 [2024-07-12 13:41:20.177120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.832 [2024-07-12 13:41:20.177132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.832 [2024-07-12 13:41:20.181012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.832 [2024-07-12 13:41:20.190119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.832 [2024-07-12 13:41:20.190562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.833 [2024-07-12 13:41:20.190588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.833 [2024-07-12 13:41:20.190603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.833 [2024-07-12 13:41:20.190889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.833 [2024-07-12 13:41:20.191137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.833 [2024-07-12 13:41:20.191156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.833 [2024-07-12 13:41:20.191168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.833 [2024-07-12 13:41:20.195048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.833 [2024-07-12 13:41:20.204297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.833 [2024-07-12 13:41:20.204739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.833 [2024-07-12 13:41:20.204765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.833 [2024-07-12 13:41:20.204780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.833 [2024-07-12 13:41:20.205058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.833 [2024-07-12 13:41:20.205299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.833 [2024-07-12 13:41:20.205339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.833 [2024-07-12 13:41:20.205354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.833 [2024-07-12 13:41:20.209200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.833 [2024-07-12 13:41:20.218591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.833 [2024-07-12 13:41:20.219040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.833 [2024-07-12 13:41:20.219067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.833 [2024-07-12 13:41:20.219083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.833 [2024-07-12 13:41:20.219403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.833 [2024-07-12 13:41:20.219682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.833 [2024-07-12 13:41:20.219716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.833 [2024-07-12 13:41:20.219728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.833 [2024-07-12 13:41:20.223570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.833 [2024-07-12 13:41:20.232851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.833 [2024-07-12 13:41:20.233258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.833 [2024-07-12 13:41:20.233284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.833 [2024-07-12 13:41:20.233312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.833 [2024-07-12 13:41:20.233640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.833 [2024-07-12 13:41:20.233918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.833 [2024-07-12 13:41:20.233936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.833 [2024-07-12 13:41:20.233948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.833 [2024-07-12 13:41:20.237812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.833 [2024-07-12 13:41:20.247134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.833 [2024-07-12 13:41:20.247570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.833 [2024-07-12 13:41:20.247597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.833 [2024-07-12 13:41:20.247612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.833 [2024-07-12 13:41:20.247904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.833 [2024-07-12 13:41:20.248159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.833 [2024-07-12 13:41:20.248178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.833 [2024-07-12 13:41:20.248191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.833 [2024-07-12 13:41:20.252071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.833 [2024-07-12 13:41:20.261375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.833 [2024-07-12 13:41:20.261808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.833 [2024-07-12 13:41:20.261835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.833 [2024-07-12 13:41:20.261849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.833 [2024-07-12 13:41:20.262128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.833 [2024-07-12 13:41:20.262411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.833 [2024-07-12 13:41:20.262431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.833 [2024-07-12 13:41:20.262444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.833 [2024-07-12 13:41:20.266255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.833 [2024-07-12 13:41:20.275605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.833 [2024-07-12 13:41:20.276068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.833 [2024-07-12 13:41:20.276095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.833 [2024-07-12 13:41:20.276110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.833 [2024-07-12 13:41:20.276421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.833 [2024-07-12 13:41:20.276706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.833 [2024-07-12 13:41:20.276724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.833 [2024-07-12 13:41:20.276741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.833 [2024-07-12 13:41:20.280568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.833 [2024-07-12 13:41:20.289839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.833 [2024-07-12 13:41:20.290280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.833 [2024-07-12 13:41:20.290328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:22.833 [2024-07-12 13:41:20.290345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:22.833 [2024-07-12 13:41:20.290644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:22.833 [2024-07-12 13:41:20.290885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.833 [2024-07-12 13:41:20.290903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.833 [2024-07-12 13:41:20.290915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.833 [2024-07-12 13:41:20.294775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.092 [2024-07-12 13:41:20.304577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.092 [2024-07-12 13:41:20.305034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.092 [2024-07-12 13:41:20.305060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.092 [2024-07-12 13:41:20.305075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.092 [2024-07-12 13:41:20.305387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.092 [2024-07-12 13:41:20.305658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.092 [2024-07-12 13:41:20.305678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.092 [2024-07-12 13:41:20.305704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.092 [2024-07-12 13:41:20.309842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.092 [2024-07-12 13:41:20.318686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.092 [2024-07-12 13:41:20.319128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.092 [2024-07-12 13:41:20.319155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.092 [2024-07-12 13:41:20.319170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.092 [2024-07-12 13:41:20.319483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.092 [2024-07-12 13:41:20.319763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.092 [2024-07-12 13:41:20.319782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.092 [2024-07-12 13:41:20.319794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.092 [2024-07-12 13:41:20.323638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.092 [2024-07-12 13:41:20.332848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.092 [2024-07-12 13:41:20.333256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.092 [2024-07-12 13:41:20.333282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.092 [2024-07-12 13:41:20.333296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.092 [2024-07-12 13:41:20.333606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.092 [2024-07-12 13:41:20.333865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.092 [2024-07-12 13:41:20.333883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.092 [2024-07-12 13:41:20.333895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.092 [2024-07-12 13:41:20.337755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.092 [2024-07-12 13:41:20.347194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.092 [2024-07-12 13:41:20.347635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.092 [2024-07-12 13:41:20.347678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.092 [2024-07-12 13:41:20.347693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.347971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.348212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.348230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.348242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.352132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.093 [2024-07-12 13:41:20.361412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.361886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.361914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.361930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.362224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.362541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.362563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.362577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.367083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.093 [2024-07-12 13:41:20.375802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.376244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.376270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.376285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.376613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.376860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.376878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.376891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.380833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.093 [2024-07-12 13:41:20.389882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.390324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.390367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.390383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.390703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.390944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.390962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.390974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.394808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.093 [2024-07-12 13:41:20.404075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.404499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.404525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.404540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.404816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.405057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.405075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.405087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.408951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.093 [2024-07-12 13:41:20.418370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.418881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.418908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.418923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.419222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.419513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.419533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.419546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.423397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.093 [2024-07-12 13:41:20.432674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.433115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.433142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.433157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.433470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.433751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.433769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.433781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.437598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.093 [2024-07-12 13:41:20.446905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.447310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.447358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.447374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.447673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.447914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.447932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.447944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.451846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.093 [2024-07-12 13:41:20.461179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.461617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.461658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.461673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.461951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.462192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.462210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.462222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.466085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.093 [2024-07-12 13:41:20.475493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.475950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.475977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.475997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.476295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.476586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.476606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.476619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.480506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.093 [2024-07-12 13:41:20.489741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.093 [2024-07-12 13:41:20.490190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.093 [2024-07-12 13:41:20.490224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.093 [2024-07-12 13:41:20.490239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.093 [2024-07-12 13:41:20.490559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.093 [2024-07-12 13:41:20.490820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.093 [2024-07-12 13:41:20.490839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.093 [2024-07-12 13:41:20.490851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.093 [2024-07-12 13:41:20.494691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.093 [2024-07-12 13:41:20.503976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.094 [2024-07-12 13:41:20.504415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.094 [2024-07-12 13:41:20.504441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.094 [2024-07-12 13:41:20.504456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.094 [2024-07-12 13:41:20.504736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.094 [2024-07-12 13:41:20.504976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.094 [2024-07-12 13:41:20.504995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.094 [2024-07-12 13:41:20.505007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.094 [2024-07-12 13:41:20.508847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.094 [2024-07-12 13:41:20.518278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.094 [2024-07-12 13:41:20.518728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.094 [2024-07-12 13:41:20.518755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.094 [2024-07-12 13:41:20.518770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.094 [2024-07-12 13:41:20.519068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.094 [2024-07-12 13:41:20.519334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.094 [2024-07-12 13:41:20.519374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.094 [2024-07-12 13:41:20.519396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.094 [2024-07-12 13:41:20.523226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.094 [2024-07-12 13:41:20.532665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.094 [2024-07-12 13:41:20.533127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.094 [2024-07-12 13:41:20.533153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.094 [2024-07-12 13:41:20.533168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.094 [2024-07-12 13:41:20.533474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.094 [2024-07-12 13:41:20.533773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.094 [2024-07-12 13:41:20.533794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.094 [2024-07-12 13:41:20.533806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.094 [2024-07-12 13:41:20.537969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.094 [2024-07-12 13:41:20.546878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.094 [2024-07-12 13:41:20.547342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.094 [2024-07-12 13:41:20.547382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.094 [2024-07-12 13:41:20.547398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.094 [2024-07-12 13:41:20.547713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.094 [2024-07-12 13:41:20.547955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.094 [2024-07-12 13:41:20.547973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.094 [2024-07-12 13:41:20.547985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.094 [2024-07-12 13:41:20.551896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.094 [2024-07-12 13:41:20.561329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.094 [2024-07-12 13:41:20.561839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.094 [2024-07-12 13:41:20.561870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.094 [2024-07-12 13:41:20.561885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.094 [2024-07-12 13:41:20.562169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.094 [2024-07-12 13:41:20.562478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.094 [2024-07-12 13:41:20.562499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.094 [2024-07-12 13:41:20.562512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.353 [2024-07-12 13:41:20.566578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.353 [2024-07-12 13:41:20.575639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.353 [2024-07-12 13:41:20.576084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.353 [2024-07-12 13:41:20.576110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.353 [2024-07-12 13:41:20.576125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.353 [2024-07-12 13:41:20.576412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.353 [2024-07-12 13:41:20.576654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.353 [2024-07-12 13:41:20.576672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.353 [2024-07-12 13:41:20.576683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.353 [2024-07-12 13:41:20.580550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.353 [2024-07-12 13:41:20.589910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.353 [2024-07-12 13:41:20.590367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.353 [2024-07-12 13:41:20.590396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.353 [2024-07-12 13:41:20.590412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.353 [2024-07-12 13:41:20.590714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.353 [2024-07-12 13:41:20.590956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.353 [2024-07-12 13:41:20.590976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.353 [2024-07-12 13:41:20.590987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.353 [2024-07-12 13:41:20.594879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.354 [2024-07-12 13:41:20.604186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.604669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.604696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.604712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.605009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.605251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.605270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.605282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.609161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.354 [2024-07-12 13:41:20.618534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.618975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.619002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.619017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.619310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.619608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.619629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.619642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.624101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.354 [2024-07-12 13:41:20.632913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.633378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.633407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.633422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.633711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.633952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.633970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.633982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.637994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.354 [2024-07-12 13:41:20.647202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.647686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.647713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.647728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.648025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.648276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.648294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.648329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.652266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.354 [2024-07-12 13:41:20.661354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.661836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.661863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.661877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.662175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.662488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.662509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.662527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.666420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.354 [2024-07-12 13:41:20.675594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.676051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.676078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.676093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.676386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.676649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.676667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.676679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.680530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.354 [2024-07-12 13:41:20.689859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.690280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.690330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.690347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.690636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.690893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.690911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.690924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.694785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.354 [2024-07-12 13:41:20.704048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.704550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.704578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.704594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.704905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.705146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.705164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.705176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.709044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.354 [2024-07-12 13:41:20.718408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.718878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.718905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.718920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.719216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.719505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.719526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.719539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.723400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.354 [2024-07-12 13:41:20.732849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.733288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.733323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.354 [2024-07-12 13:41:20.733341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.354 [2024-07-12 13:41:20.733640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.354 [2024-07-12 13:41:20.733896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.354 [2024-07-12 13:41:20.733915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.354 [2024-07-12 13:41:20.733927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.354 [2024-07-12 13:41:20.737798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.354 [2024-07-12 13:41:20.747172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.354 [2024-07-12 13:41:20.747654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.354 [2024-07-12 13:41:20.747681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.355 [2024-07-12 13:41:20.747696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.355 [2024-07-12 13:41:20.747992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.355 [2024-07-12 13:41:20.748233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.355 [2024-07-12 13:41:20.748251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.355 [2024-07-12 13:41:20.748263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.355 [2024-07-12 13:41:20.752203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.355 [2024-07-12 13:41:20.761354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.355 [2024-07-12 13:41:20.761836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.355 [2024-07-12 13:41:20.761862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.355 [2024-07-12 13:41:20.761877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.355 [2024-07-12 13:41:20.762155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.355 [2024-07-12 13:41:20.762447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.355 [2024-07-12 13:41:20.762468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.355 [2024-07-12 13:41:20.762480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.355 [2024-07-12 13:41:20.766406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.355 [2024-07-12 13:41:20.775438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.355 [2024-07-12 13:41:20.775897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.355 [2024-07-12 13:41:20.775923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.355 [2024-07-12 13:41:20.775937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.355 [2024-07-12 13:41:20.776216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.355 [2024-07-12 13:41:20.776504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.355 [2024-07-12 13:41:20.776525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.355 [2024-07-12 13:41:20.776537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.355 [2024-07-12 13:41:20.780379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.355 [2024-07-12 13:41:20.789602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.355 [2024-07-12 13:41:20.790056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.355 [2024-07-12 13:41:20.790082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.355 [2024-07-12 13:41:20.790097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.355 [2024-07-12 13:41:20.790398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.355 [2024-07-12 13:41:20.790656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.355 [2024-07-12 13:41:20.790676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.355 [2024-07-12 13:41:20.790703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.355 [2024-07-12 13:41:20.794530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.355 [2024-07-12 13:41:20.803767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.355 [2024-07-12 13:41:20.804159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.355 [2024-07-12 13:41:20.804185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.355 [2024-07-12 13:41:20.804199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.355 [2024-07-12 13:41:20.804474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.355 [2024-07-12 13:41:20.804742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.355 [2024-07-12 13:41:20.804760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.355 [2024-07-12 13:41:20.804772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.355 [2024-07-12 13:41:20.808630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.355 [2024-07-12 13:41:20.818068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.355 [2024-07-12 13:41:20.818515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.355 [2024-07-12 13:41:20.818542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.355 [2024-07-12 13:41:20.818557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.355 [2024-07-12 13:41:20.818855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.355 [2024-07-12 13:41:20.819096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.355 [2024-07-12 13:41:20.819114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.355 [2024-07-12 13:41:20.819126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.355 [2024-07-12 13:41:20.823409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.614 [2024-07-12 13:41:20.832645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.614 [2024-07-12 13:41:20.833047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.614 [2024-07-12 13:41:20.833073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.614 [2024-07-12 13:41:20.833088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.614 [2024-07-12 13:41:20.833379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.614 [2024-07-12 13:41:20.833652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.614 [2024-07-12 13:41:20.833686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.614 [2024-07-12 13:41:20.833698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.614 [2024-07-12 13:41:20.837535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
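The secondary error in each cycle, "Failed to flush tqpair=0x95bb50 (9): Bad file descriptor", is a consequence of the refused connect rather than a separate fault: by the time the flush runs, the socket descriptor behind the qpair has already been torn down, so the write path sees errno 9 (EBADF on Linux). A tiny standalone illustration of that errno, unrelated to SPDK's internals:

    /* ebadf_demo.c -- illustrative only: shows why the flush error above reports
     * "(9): Bad file descriptor". Errno 9 on Linux is EBADF, i.e. the descriptor
     * being written through has already been closed. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) {
            perror("pipe");
            return 1;
        }
        close(fds[1]);                        /* tear the descriptor down first */

        if (write(fds[1], "x", 1) < 0) {
            /* Prints errno 9 (EBADF): writing through an already-closed fd. */
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fds[0]);
        return 0;
    }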
00:33:23.614 [2024-07-12 13:41:20.846915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.614 [2024-07-12 13:41:20.847320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.614 [2024-07-12 13:41:20.847370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.614 [2024-07-12 13:41:20.847385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.614 [2024-07-12 13:41:20.847683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.614 [2024-07-12 13:41:20.847924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.614 [2024-07-12 13:41:20.847942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.614 [2024-07-12 13:41:20.847953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.614 [2024-07-12 13:41:20.851885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.614 [2024-07-12 13:41:20.861213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.614 [2024-07-12 13:41:20.861639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.614 [2024-07-12 13:41:20.861665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.614 [2024-07-12 13:41:20.861685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.614 [2024-07-12 13:41:20.861965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.862206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.862225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.862236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.866124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.615 [2024-07-12 13:41:20.875443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.875906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.875933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.875949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.876242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.876559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.876584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.876598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.881043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.615 [2024-07-12 13:41:20.889744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.890182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.890208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.890223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.890539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.890835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.890854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.890866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.894998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.615 [2024-07-12 13:41:20.904022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.904480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.904508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.904523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.904819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.905060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.905082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.905095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.908959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.615 [2024-07-12 13:41:20.918257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.918790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.918817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.918832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.919131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.919398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.919419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.919431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.923279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.615 [2024-07-12 13:41:20.932366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.932795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.932820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.932835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.933118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.933404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.933424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.933437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.937264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.615 [2024-07-12 13:41:20.946642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.947082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.947109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.947125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.947439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.947723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.947742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.947754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.951612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.615 [2024-07-12 13:41:20.960868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.961303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.961351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.961367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.961665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.961905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.961924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.961936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.965803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.615 [2024-07-12 13:41:20.975075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.975491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.975519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.975533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.975807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.976048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.976067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.976078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.979947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.615 [2024-07-12 13:41:20.989218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:20.989641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:20.989668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:20.989683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:20.989981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:20.990222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:20.990240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:20.990252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:20.994118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.615 [2024-07-12 13:41:21.003367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:21.003838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:21.003865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.615 [2024-07-12 13:41:21.003880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.615 [2024-07-12 13:41:21.004183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.615 [2024-07-12 13:41:21.004475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.615 [2024-07-12 13:41:21.004497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.615 [2024-07-12 13:41:21.004510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.615 [2024-07-12 13:41:21.008354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.615 [2024-07-12 13:41:21.017645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.615 [2024-07-12 13:41:21.018082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.615 [2024-07-12 13:41:21.018109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.616 [2024-07-12 13:41:21.018124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.616 [2024-07-12 13:41:21.018424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.616 [2024-07-12 13:41:21.018715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.616 [2024-07-12 13:41:21.018734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.616 [2024-07-12 13:41:21.018746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.616 [2024-07-12 13:41:21.022560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.616 [2024-07-12 13:41:21.031817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.616 [2024-07-12 13:41:21.032258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.616 [2024-07-12 13:41:21.032284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.616 [2024-07-12 13:41:21.032299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.616 [2024-07-12 13:41:21.032626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.616 [2024-07-12 13:41:21.032883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.616 [2024-07-12 13:41:21.032901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.616 [2024-07-12 13:41:21.032913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.616 [2024-07-12 13:41:21.036774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.616 [2024-07-12 13:41:21.046045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.616 [2024-07-12 13:41:21.046521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.616 [2024-07-12 13:41:21.046548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.616 [2024-07-12 13:41:21.046563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.616 [2024-07-12 13:41:21.046858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.616 [2024-07-12 13:41:21.047099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.616 [2024-07-12 13:41:21.047117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.616 [2024-07-12 13:41:21.047134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.616 [2024-07-12 13:41:21.050920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.616 [2024-07-12 13:41:21.060274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.616 [2024-07-12 13:41:21.060734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.616 [2024-07-12 13:41:21.060759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.616 [2024-07-12 13:41:21.060773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.616 [2024-07-12 13:41:21.061032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.616 [2024-07-12 13:41:21.061273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.616 [2024-07-12 13:41:21.061304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.616 [2024-07-12 13:41:21.061326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.616 [2024-07-12 13:41:21.065171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.616 [2024-07-12 13:41:21.074494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.616 [2024-07-12 13:41:21.074917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.616 [2024-07-12 13:41:21.074944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.616 [2024-07-12 13:41:21.074959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.616 [2024-07-12 13:41:21.075246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.616 [2024-07-12 13:41:21.075561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.616 [2024-07-12 13:41:21.075582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.616 [2024-07-12 13:41:21.075595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.616 [2024-07-12 13:41:21.079439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.876 [2024-07-12 13:41:21.088996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.876 [2024-07-12 13:41:21.089424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.876 [2024-07-12 13:41:21.089451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.876 [2024-07-12 13:41:21.089466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.876 [2024-07-12 13:41:21.089760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.876 [2024-07-12 13:41:21.090039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.876 [2024-07-12 13:41:21.090072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.876 [2024-07-12 13:41:21.090084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.876 [2024-07-12 13:41:21.094149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.876 [2024-07-12 13:41:21.103202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.876 [2024-07-12 13:41:21.103631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.876 [2024-07-12 13:41:21.103658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.876 [2024-07-12 13:41:21.103673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.876 [2024-07-12 13:41:21.103952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.876 [2024-07-12 13:41:21.104192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.876 [2024-07-12 13:41:21.104211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.876 [2024-07-12 13:41:21.104223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.876 [2024-07-12 13:41:21.108093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.876 [2024-07-12 13:41:21.117425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.876 [2024-07-12 13:41:21.117880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.876 [2024-07-12 13:41:21.117907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.876 [2024-07-12 13:41:21.117922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.876 [2024-07-12 13:41:21.118219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.876 [2024-07-12 13:41:21.118509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.876 [2024-07-12 13:41:21.118529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.876 [2024-07-12 13:41:21.118542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.876 [2024-07-12 13:41:21.122381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.876 [2024-07-12 13:41:21.131717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.876 [2024-07-12 13:41:21.132165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.876 [2024-07-12 13:41:21.132193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.876 [2024-07-12 13:41:21.132209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.876 [2024-07-12 13:41:21.132487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.876 [2024-07-12 13:41:21.132773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.876 [2024-07-12 13:41:21.132793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.876 [2024-07-12 13:41:21.132821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.876 [2024-07-12 13:41:21.137226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.876 [2024-07-12 13:41:21.146084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.876 [2024-07-12 13:41:21.146566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.876 [2024-07-12 13:41:21.146609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.876 [2024-07-12 13:41:21.146624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.876 [2024-07-12 13:41:21.146915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.876 [2024-07-12 13:41:21.147161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.876 [2024-07-12 13:41:21.147180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.876 [2024-07-12 13:41:21.147191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.876 [2024-07-12 13:41:21.151143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.876 [2024-07-12 13:41:21.160338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.876 [2024-07-12 13:41:21.160831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.876 [2024-07-12 13:41:21.160858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.876 [2024-07-12 13:41:21.160873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.876 [2024-07-12 13:41:21.161170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.876 [2024-07-12 13:41:21.161439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.876 [2024-07-12 13:41:21.161459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.876 [2024-07-12 13:41:21.161472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.876 [2024-07-12 13:41:21.165227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.876 [2024-07-12 13:41:21.174524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.876 [2024-07-12 13:41:21.174947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.876 [2024-07-12 13:41:21.174973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.876 [2024-07-12 13:41:21.174987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.876 [2024-07-12 13:41:21.175265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.876 [2024-07-12 13:41:21.175556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.876 [2024-07-12 13:41:21.175576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.876 [2024-07-12 13:41:21.175589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.876 [2024-07-12 13:41:21.179431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.876 [2024-07-12 13:41:21.188671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.189109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.189135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.189149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.189440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.189724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.189742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.189754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.193593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.877 [2024-07-12 13:41:21.202816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.203234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.203260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.203275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.203596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.203870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.203888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.203900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.207747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.877 [2024-07-12 13:41:21.217013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.217486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.217515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.217530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.217825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.218066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.218084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.218096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.221961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.877 [2024-07-12 13:41:21.231215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.231651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.231677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.231691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.231949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.232190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.232209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.232221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.236089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.877 [2024-07-12 13:41:21.245378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.245839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.245865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.245885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.246183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.246475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.246496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.246510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.250356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.877 [2024-07-12 13:41:21.259448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.259862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.259887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.259901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.260160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.260431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.260451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.260464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.264281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.877 [2024-07-12 13:41:21.273596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.274067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.274095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.274110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.274431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.274710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.274744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.274756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.278634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.877 [2024-07-12 13:41:21.287920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.288413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.288441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.288457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.288748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.288997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.289020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.289033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.292881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.877 [2024-07-12 13:41:21.302111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.302613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.302641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.302672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.302970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.303211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.303229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.303241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.307180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.877 [2024-07-12 13:41:21.316138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.316611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.316638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.316653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.316951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.317191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.317209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.317222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.877 [2024-07-12 13:41:21.320888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.877 [2024-07-12 13:41:21.330047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.877 [2024-07-12 13:41:21.330505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.877 [2024-07-12 13:41:21.330533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.877 [2024-07-12 13:41:21.330548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.877 [2024-07-12 13:41:21.330845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.877 [2024-07-12 13:41:21.331086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.877 [2024-07-12 13:41:21.331104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.877 [2024-07-12 13:41:21.331116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.878 [2024-07-12 13:41:21.334868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.878 [2024-07-12 13:41:21.344246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.878 [2024-07-12 13:41:21.344842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.878 [2024-07-12 13:41:21.344871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:23.878 [2024-07-12 13:41:21.344886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:23.878 [2024-07-12 13:41:21.345168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:23.878 [2024-07-12 13:41:21.345499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.878 [2024-07-12 13:41:21.345536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.878 [2024-07-12 13:41:21.345549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.349588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.137 [2024-07-12 13:41:21.358306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.358730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.358756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.358770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.359036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.359292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.359311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.359347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.363096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.137 [2024-07-12 13:41:21.372242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.372696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.372721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.372735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.372993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.373234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.373252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.373265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.377060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.137 [2024-07-12 13:41:21.386197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.386691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.386718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.386733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.387012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.387330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.387373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.387392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.391776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.137 [2024-07-12 13:41:21.400469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.400959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.400985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.401000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.401279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.401572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.401607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.401620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.405470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.137 [2024-07-12 13:41:21.414490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.414913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.414939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.414954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.415223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.415531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.415552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.415565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.419295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.137 [2024-07-12 13:41:21.428508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.428962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.428988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.429003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.429280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.429570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.429591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.429625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.433428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.137 [2024-07-12 13:41:21.442733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.443152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.443178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.443192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.443490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.443755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.443774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.443786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.447517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.137 [2024-07-12 13:41:21.456677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.457157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.457211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.457226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.457535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.457794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.457813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.457825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.461430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.137 [2024-07-12 13:41:21.470532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.471091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.137 [2024-07-12 13:41:21.471147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.137 [2024-07-12 13:41:21.471161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.137 [2024-07-12 13:41:21.471469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.137 [2024-07-12 13:41:21.471731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.137 [2024-07-12 13:41:21.471750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.137 [2024-07-12 13:41:21.471762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.137 [2024-07-12 13:41:21.475529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.137 [2024-07-12 13:41:21.484494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.137 [2024-07-12 13:41:21.485065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.485115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.485129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.485430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.485679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.485712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.485724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.489369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.138 [2024-07-12 13:41:21.498465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.138 [2024-07-12 13:41:21.498919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.498946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.498961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.499257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.499534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.499554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.499567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.503346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.138 [2024-07-12 13:41:21.512539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.138 [2024-07-12 13:41:21.512992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.513019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.513035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.513380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.513667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.513685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.513697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.517437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.138 [2024-07-12 13:41:21.526544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.138 [2024-07-12 13:41:21.526972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.526998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.527013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.527301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.527579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.527600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.527612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.531349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.138 [2024-07-12 13:41:21.540547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.138 [2024-07-12 13:41:21.540999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.541026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.541041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.541347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.541619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.541639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.541652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.545407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.138 [2024-07-12 13:41:21.554590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.138 [2024-07-12 13:41:21.555040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.555067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.555082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.555391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.555663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.555697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.555709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.559452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.138 [2024-07-12 13:41:21.568549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.138 [2024-07-12 13:41:21.568954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.568980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.568995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.569253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.569526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.569547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.569559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.573200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.138 [2024-07-12 13:41:21.582730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.138 [2024-07-12 13:41:21.583181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.583207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.583221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.583494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.583754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.583773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.583785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.587390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.138 [2024-07-12 13:41:21.596699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.138 [2024-07-12 13:41:21.597101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.138 [2024-07-12 13:41:21.597126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.138 [2024-07-12 13:41:21.597141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.138 [2024-07-12 13:41:21.597428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.138 [2024-07-12 13:41:21.597691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.138 [2024-07-12 13:41:21.597709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.138 [2024-07-12 13:41:21.597721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.138 [2024-07-12 13:41:21.601324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.397 [2024-07-12 13:41:21.611195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.397 [2024-07-12 13:41:21.611646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.397 [2024-07-12 13:41:21.611688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.397 [2024-07-12 13:41:21.611704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.397 [2024-07-12 13:41:21.611997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.397 [2024-07-12 13:41:21.612288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.397 [2024-07-12 13:41:21.612334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.397 [2024-07-12 13:41:21.612350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.397 [2024-07-12 13:41:21.616352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.397 [2024-07-12 13:41:21.625135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.397 [2024-07-12 13:41:21.625640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.397 [2024-07-12 13:41:21.625668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.397 [2024-07-12 13:41:21.625688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.397 [2024-07-12 13:41:21.625983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.397 [2024-07-12 13:41:21.626224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.397 [2024-07-12 13:41:21.626243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.397 [2024-07-12 13:41:21.626255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.397 [2024-07-12 13:41:21.630022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.397 [2024-07-12 13:41:21.639129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.397 [2024-07-12 13:41:21.639733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.397 [2024-07-12 13:41:21.639799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.397 [2024-07-12 13:41:21.639815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.397 [2024-07-12 13:41:21.640104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.397 [2024-07-12 13:41:21.640445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.397 [2024-07-12 13:41:21.640468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.397 [2024-07-12 13:41:21.640497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.397 [2024-07-12 13:41:21.644893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.397 [2024-07-12 13:41:21.653485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.397 [2024-07-12 13:41:21.653974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.397 [2024-07-12 13:41:21.654001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.397 [2024-07-12 13:41:21.654016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.397 [2024-07-12 13:41:21.654313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.397 [2024-07-12 13:41:21.654611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.397 [2024-07-12 13:41:21.654631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.397 [2024-07-12 13:41:21.654658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.397 [2024-07-12 13:41:21.658534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.397 [2024-07-12 13:41:21.667654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.397 [2024-07-12 13:41:21.668110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.397 [2024-07-12 13:41:21.668138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.397 [2024-07-12 13:41:21.668153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.397 [2024-07-12 13:41:21.668475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.397 [2024-07-12 13:41:21.668765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.397 [2024-07-12 13:41:21.668785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.397 [2024-07-12 13:41:21.668797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.672572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.398 [2024-07-12 13:41:21.681813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.682292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.682365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.682395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.682671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.682930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.682949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.682961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.687110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.398 [2024-07-12 13:41:21.696025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.696449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.696477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.696492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.696790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.697031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.697050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.697062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.700897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.398 [2024-07-12 13:41:21.710205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.710658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.710700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.710715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.710995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.711236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.711255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.711267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.715096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.398 [2024-07-12 13:41:21.724434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.725042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.725097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.725111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.725401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.725686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.725705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.725717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.729478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.398 [2024-07-12 13:41:21.738551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.739063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.739090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.739105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.739404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.739673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.739708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.739720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.743513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.398 [2024-07-12 13:41:21.753428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.753990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.754043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.754059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.754384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.754672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.754692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.754705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.758917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.398 [2024-07-12 13:41:21.768207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.768660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.768689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.768710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.769012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.769260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.769279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.769292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.773206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.398 [2024-07-12 13:41:21.782473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.782925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.782951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.782966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.783244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.783536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.783559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.783572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.787512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.398 [2024-07-12 13:41:21.796805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.797331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.797359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.797375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.797681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.797942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.797961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.797973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.801946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.398 [2024-07-12 13:41:21.810903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.811372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.811400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.811416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.811718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.398 [2024-07-12 13:41:21.811959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.398 [2024-07-12 13:41:21.811978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.398 [2024-07-12 13:41:21.811994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.398 [2024-07-12 13:41:21.815801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.398 [2024-07-12 13:41:21.825135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.398 [2024-07-12 13:41:21.825563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.398 [2024-07-12 13:41:21.825590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.398 [2024-07-12 13:41:21.825606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.398 [2024-07-12 13:41:21.825893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.399 [2024-07-12 13:41:21.826159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.399 [2024-07-12 13:41:21.826179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.399 [2024-07-12 13:41:21.826192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.399 [2024-07-12 13:41:21.830179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3727830 Killed "${NVMF_APP[@]}" "$@" 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.399 [2024-07-12 13:41:21.839535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.399 [2024-07-12 13:41:21.839991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.399 [2024-07-12 13:41:21.840018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.399 [2024-07-12 13:41:21.840033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.399 [2024-07-12 13:41:21.840313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3728778 00:33:24.399 [2024-07-12 13:41:21.840597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.399 [2024-07-12 13:41:21.840633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.399 [2024-07-12 13:41:21.840645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3728778 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3728778 ']' 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:24.399 13:41:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.399 [2024-07-12 13:41:21.844695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
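The script has just killed the previous target process, and tgt_init / nvmfappstart is relaunching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with core mask 0xE, then waiting for its RPC socket. A condensed sketch of that start-and-wait pattern, with the workspace path shortened and a simple polling loop standing in for the suite's waitforlisten helper:

  # Restart the target in its network namespace and poll its RPC socket until it answers.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # target not ready yet; keep polling
  done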
00:33:24.399 [2024-07-12 13:41:21.854017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.399 [2024-07-12 13:41:21.854447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.399 [2024-07-12 13:41:21.854493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.399 [2024-07-12 13:41:21.854509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.399 [2024-07-12 13:41:21.854821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.399 [2024-07-12 13:41:21.855090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.399 [2024-07-12 13:41:21.855110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.399 [2024-07-12 13:41:21.855123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.399 [2024-07-12 13:41:21.859336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.658 [2024-07-12 13:41:21.868841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.869275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.869302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.869326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.869612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.869885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.869905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.869932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.874086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.658 [2024-07-12 13:41:21.883160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.883624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.883666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.883681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.883966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.884215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.884234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.884246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.887435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:24.658 [2024-07-12 13:41:21.887507] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.658 [2024-07-12 13:41:21.888174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.658 [2024-07-12 13:41:21.897455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.897966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.897994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.898010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.898340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.898612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.898634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.898647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.903120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.658 [2024-07-12 13:41:21.911786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.912220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.912247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.912263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.912781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.913045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.913064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.913077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.917023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.658 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.658 [2024-07-12 13:41:21.926251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.926374] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:24.658 [2024-07-12 13:41:21.926719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.926747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.926764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.927062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.927342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.927374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.927387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.931405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
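The EAL notice about no free 2048 kB hugepages on node 1 is normally handled before the application starts. A sketch, assuming SPDK's scripts/setup.sh helper and the standard sysfs knobs (the amounts shown are illustrative, not the values this CI host uses):

  # Reserve hugepages up front so EAL does not come up without them.
  sudo HUGEMEM=2048 ./scripts/setup.sh
  # or set the per-node count directly for the node named in the warning:
  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages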
00:33:24.658 [2024-07-12 13:41:21.940694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.941177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.941205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.941220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.941501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.941781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.941801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.941814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.945818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.658 [2024-07-12 13:41:21.954657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:24.658 [2024-07-12 13:41:21.955061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.955528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.955557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.955572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.955867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.956124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.956143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.956156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.960224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.658 [2024-07-12 13:41:21.969431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.970070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.970108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.970127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.970462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.970745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.970766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.970782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.974793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.658 [2024-07-12 13:41:21.983797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.984308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.984343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.984368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.984668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.984925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.984944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.984958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:21.988948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.658 [2024-07-12 13:41:21.998078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:21.998575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:21.998619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:21.998635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:21.998910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:21.999166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:21.999186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:21.999200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:22.003209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.658 [2024-07-12 13:41:22.012429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:22.013081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:22.013121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:22.013141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:22.013463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:22.013778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:22.013799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:22.013816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:22.017814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.658 [2024-07-12 13:41:22.026774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:22.027248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:22.027276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:22.027292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:22.027588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:22.027862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:22.027883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:22.027905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:22.031896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.658 [2024-07-12 13:41:22.041131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.658 [2024-07-12 13:41:22.041607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.658 [2024-07-12 13:41:22.041649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.658 [2024-07-12 13:41:22.041664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.658 [2024-07-12 13:41:22.041940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.658 [2024-07-12 13:41:22.042197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.658 [2024-07-12 13:41:22.042218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.658 [2024-07-12 13:41:22.042231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.658 [2024-07-12 13:41:22.046159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.658 [2024-07-12 13:41:22.046194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.658 [2024-07-12 13:41:22.046222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.659 [2024-07-12 13:41:22.046233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.659 [2024-07-12 13:41:22.046243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.659 [2024-07-12 13:41:22.046265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
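The trace notices above point at two ways to collect the nvmf tracepoints; a sketch assuming the spdk_trace binary sits under build/bin in this workspace:

  # Live snapshot of the nvmf tracepoints from shared memory (shm id 0):
  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # Or keep the shm file around for offline analysis, as the notice suggests:
  cp /dev/shm/nvmf_trace.0 /tmp/
  ./build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace_offline.txt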
00:33:24.659 [2024-07-12 13:41:22.046335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:24.659 [2024-07-12 13:41:22.046397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:24.659 [2024-07-12 13:41:22.046399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.659 [2024-07-12 13:41:22.055780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.659 [2024-07-12 13:41:22.056373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.659 [2024-07-12 13:41:22.056410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.659 [2024-07-12 13:41:22.056430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.659 [2024-07-12 13:41:22.056708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.659 [2024-07-12 13:41:22.056985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.659 [2024-07-12 13:41:22.057006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.659 [2024-07-12 13:41:22.057022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.659 [2024-07-12 13:41:22.061210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.659 [2024-07-12 13:41:22.070448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.659 [2024-07-12 13:41:22.071077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.659 [2024-07-12 13:41:22.071114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.659 [2024-07-12 13:41:22.071133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.659 [2024-07-12 13:41:22.071429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.659 [2024-07-12 13:41:22.071707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.659 [2024-07-12 13:41:22.071729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.659 [2024-07-12 13:41:22.071746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.659 [2024-07-12 13:41:22.076004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
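The three reactors starting on cores 1, 2 and 3 line up with the -m 0xE core mask the target was launched with; a small, purely illustrative bash check that decodes such a mask:

  # Decode a core mask: 0xE is binary 1110, i.e. cores 1, 2 and 3.
  mask=0xE
  printf 'mask %s -> cores:' "$mask"
  for c in $(seq 0 31); do
      if (( (mask >> c) & 1 )); then printf ' %d' "$c"; fi
  done
  echo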
00:33:24.659 [2024-07-12 13:41:22.085195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.659 [2024-07-12 13:41:22.085870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.659 [2024-07-12 13:41:22.085908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.659 [2024-07-12 13:41:22.085927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.659 [2024-07-12 13:41:22.086221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.659 [2024-07-12 13:41:22.086523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.659 [2024-07-12 13:41:22.086546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.659 [2024-07-12 13:41:22.086563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.659 [2024-07-12 13:41:22.090731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.659 [2024-07-12 13:41:22.099769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.659 [2024-07-12 13:41:22.100409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.659 [2024-07-12 13:41:22.100448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.659 [2024-07-12 13:41:22.100468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.659 [2024-07-12 13:41:22.100764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.659 [2024-07-12 13:41:22.101033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.659 [2024-07-12 13:41:22.101055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.659 [2024-07-12 13:41:22.101071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.659 [2024-07-12 13:41:22.105224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.659 [2024-07-12 13:41:22.114255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.659 [2024-07-12 13:41:22.114847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.659 [2024-07-12 13:41:22.114883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.659 [2024-07-12 13:41:22.114903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.659 [2024-07-12 13:41:22.115193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.659 [2024-07-12 13:41:22.115494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.659 [2024-07-12 13:41:22.115516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.659 [2024-07-12 13:41:22.115542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.659 [2024-07-12 13:41:22.119685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.917 [2024-07-12 13:41:22.129129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.917 [2024-07-12 13:41:22.129690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.917 [2024-07-12 13:41:22.129727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.917 [2024-07-12 13:41:22.129747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.917 [2024-07-12 13:41:22.130027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.917 [2024-07-12 13:41:22.130304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.917 [2024-07-12 13:41:22.130334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.917 [2024-07-12 13:41:22.130352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.917 [2024-07-12 13:41:22.134627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.917 [2024-07-12 13:41:22.143776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.917 [2024-07-12 13:41:22.144230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.917 [2024-07-12 13:41:22.144259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.917 [2024-07-12 13:41:22.144276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.917 [2024-07-12 13:41:22.144554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.917 [2024-07-12 13:41:22.144837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.917 [2024-07-12 13:41:22.144858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.917 [2024-07-12 13:41:22.144872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.917 [2024-07-12 13:41:22.148992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.917 [2024-07-12 13:41:22.158485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.917 [2024-07-12 13:41:22.158919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.917 [2024-07-12 13:41:22.158948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.917 [2024-07-12 13:41:22.158965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.917 [2024-07-12 13:41:22.159235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.917 [2024-07-12 13:41:22.159518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.917 [2024-07-12 13:41:22.159540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.917 [2024-07-12 13:41:22.159555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.917 [2024-07-12 13:41:22.163767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.917 [2024-07-12 13:41:22.173135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.917 [2024-07-12 13:41:22.173585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.917 [2024-07-12 13:41:22.173614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.917 [2024-07-12 13:41:22.173630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.917 [2024-07-12 13:41:22.173912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.917 [2024-07-12 13:41:22.174176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.917 [2024-07-12 13:41:22.174197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.917 [2024-07-12 13:41:22.174210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.917 [2024-07-12 13:41:22.178380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.917 [2024-07-12 13:41:22.187762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.917 [2024-07-12 13:41:22.188229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.917 [2024-07-12 13:41:22.188257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.917 [2024-07-12 13:41:22.188272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.917 [2024-07-12 13:41:22.188550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.917 [2024-07-12 13:41:22.188834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.917 [2024-07-12 13:41:22.188855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.917 [2024-07-12 13:41:22.188869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.917 [2024-07-12 13:41:22.192978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.917 [2024-07-12 13:41:22.194342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.917 [2024-07-12 13:41:22.202416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.917 [2024-07-12 13:41:22.202864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.917 [2024-07-12 13:41:22.202892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.917 [2024-07-12 13:41:22.202907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.917 [2024-07-12 13:41:22.203196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.917 [2024-07-12 13:41:22.203492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.917 [2024-07-12 13:41:22.203522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.917 [2024-07-12 13:41:22.203537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.917 [2024-07-12 13:41:22.207749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.917 [2024-07-12 13:41:22.216916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.917 [2024-07-12 13:41:22.217401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.917 [2024-07-12 13:41:22.217429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.917 [2024-07-12 13:41:22.217445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.917 [2024-07-12 13:41:22.217742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.917 [2024-07-12 13:41:22.217998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.917 [2024-07-12 13:41:22.218018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.917 [2024-07-12 13:41:22.218031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.917 [2024-07-12 13:41:22.222138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.917 [2024-07-12 13:41:22.231477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.917 [2024-07-12 13:41:22.232086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.917 [2024-07-12 13:41:22.232123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.917 [2024-07-12 13:41:22.232143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.917 [2024-07-12 13:41:22.232473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.917 [2024-07-12 13:41:22.232763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.917 [2024-07-12 13:41:22.232784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.917 [2024-07-12 13:41:22.232800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.917 [2024-07-12 13:41:22.236947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.917 Malloc0 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.917 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.917 [2024-07-12 13:41:22.246159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.918 [2024-07-12 13:41:22.246686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.918 [2024-07-12 13:41:22.246715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.918 [2024-07-12 13:41:22.246733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.918 [2024-07-12 13:41:22.247027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.918 [2024-07-12 13:41:22.247308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.918 [2024-07-12 13:41:22.247339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.918 [2024-07-12 13:41:22.247355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.918 [2024-07-12 13:41:22.251675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.918 [2024-07-12 13:41:22.261067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.918 [2024-07-12 13:41:22.261499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.918 [2024-07-12 13:41:22.261528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95bb50 with addr=10.0.0.2, port=4420 00:33:24.918 [2024-07-12 13:41:22.261545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bb50 is same with the state(5) to be set 00:33:24.918 [2024-07-12 13:41:22.261829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bb50 (9): Bad file descriptor 00:33:24.918 [2024-07-12 13:41:22.262093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.918 [2024-07-12 13:41:22.262114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.918 [2024-07-12 13:41:22.262127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.918 [2024-07-12 13:41:22.262146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.918 [2024-07-12 13:41:22.266367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.918 13:41:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3727992 00:33:24.918 [2024-07-12 13:41:22.275749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.174 [2024-07-12 13:41:22.403555] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
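The rpc_cmd calls interleaved through the trace above amount to the usual TCP target bring-up: create the transport, back it with a malloc bdev, create the subsystem, attach the namespace, and add the listener. Collected into one place and issued directly through rpc.py (assuming the default RPC socket /var/tmp/spdk.sock), the same sequence would look roughly like this:

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420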
00:33:35.144 00:33:35.144 Latency(us) 00:33:35.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.144 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:35.144 Verification LBA range: start 0x0 length 0x4000 00:33:35.144 Nvme1n1 : 15.00 6631.66 25.90 7994.74 0.00 8724.41 904.15 15825.73 00:33:35.144 =================================================================================================================== 00:33:35.144 Total : 6631.66 25.90 7994.74 0.00 8724.41 904.15 15825.73 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:35.144 rmmod nvme_tcp 00:33:35.144 rmmod nvme_fabrics 00:33:35.144 rmmod nvme_keyring 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3728778 ']' 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3728778 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3728778 ']' 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3728778 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3728778 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3728778' 00:33:35.144 killing process with pid 3728778 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3728778 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3728778 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
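The teardown interleaved above (module unload, killprocess on the target pid) condenses to something like the sketch below; the pid literal is the one from this run, and wait only succeeds because the target was started by the same shell:

  modprobe -v -r nvme-tcp        # pulls nvme_fabrics and nvme_keyring out with it, as logged
  pid=3728778                    # nvmfpid recorded when the target was started
  if kill -0 "$pid" 2>/dev/null; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  fi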
00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.144 13:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.557 13:41:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:36.557 00:33:36.557 real 0m22.416s 00:33:36.557 user 0m55.983s 00:33:36.557 sys 0m5.779s 00:33:36.557 13:41:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:36.557 13:41:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:36.557 ************************************ 00:33:36.557 END TEST nvmf_bdevperf 00:33:36.557 ************************************ 00:33:36.557 13:41:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:36.557 13:41:33 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:36.557 13:41:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:36.557 13:41:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:36.557 13:41:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:36.557 ************************************ 00:33:36.557 START TEST nvmf_target_disconnect 00:33:36.557 ************************************ 00:33:36.557 13:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:36.815 * Looking for test storage... 
00:33:36.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:36.815 13:41:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:38.713 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.713 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:38.714 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.714 13:41:35 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:38.714 Found net devices under 0000:09:00.0: cvl_0_0 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:38.714 Found net devices under 0000:09:00.1: cvl_0_1 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.714 13:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:38.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:33:38.714 00:33:38.714 --- 10.0.0.2 ping statistics --- 00:33:38.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.714 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:38.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:33:38.714 00:33:38.714 --- 10.0.0.1 ping statistics --- 00:33:38.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.714 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:38.714 ************************************ 00:33:38.714 START TEST nvmf_target_disconnect_tc1 00:33:38.714 ************************************ 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:38.714 
13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:38.714 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:38.972 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.972 [2024-07-12 13:41:36.223350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.972 [2024-07-12 13:41:36.223419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197f3e0 with addr=10.0.0.2, port=4420 00:33:38.972 [2024-07-12 13:41:36.223456] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:38.972 [2024-07-12 13:41:36.223474] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:38.972 [2024-07-12 13:41:36.223487] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:38.972 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:38.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:38.972 Initializing NVMe Controllers 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:38.972 00:33:38.972 real 0m0.094s 00:33:38.972 user 0m0.035s 00:33:38.972 sys 
0m0.058s 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:38.972 ************************************ 00:33:38.972 END TEST nvmf_target_disconnect_tc1 00:33:38.972 ************************************ 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:38.972 ************************************ 00:33:38.972 START TEST nvmf_target_disconnect_tc2 00:33:38.972 ************************************ 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3731819 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3731819 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3731819 ']' 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:38.972 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.972 [2024-07-12 13:41:36.338191] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:38.972 [2024-07-12 13:41:36.338262] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.972 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.972 [2024-07-12 13:41:36.373941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:38.972 [2024-07-12 13:41:36.401270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:39.229 [2024-07-12 13:41:36.488791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.229 [2024-07-12 13:41:36.488841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.229 [2024-07-12 13:41:36.488855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.229 [2024-07-12 13:41:36.488866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.230 [2024-07-12 13:41:36.488876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:39.230 [2024-07-12 13:41:36.488963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:39.230 [2024-07-12 13:41:36.489026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:39.230 [2024-07-12 13:41:36.489091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:39.230 [2024-07-12 13:41:36.489093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.230 Malloc0 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.230 13:41:36 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.230 [2024-07-12 13:41:36.671503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.230 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.230 [2024-07-12 13:41:36.699736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.488 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.488 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:39.488 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.488 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.488 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.488 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3731957 00:33:39.488 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:39.488 13:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:39.488 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.400 13:41:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3731819 00:33:41.400 13:41:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:41.400 Read completed with error (sct=0, sc=8) 00:33:41.400 starting I/O failed 00:33:41.400 Read completed with error (sct=0, sc=8) 00:33:41.400 starting I/O failed 00:33:41.400 Read completed with error (sct=0, sc=8) 00:33:41.400 starting I/O failed 00:33:41.400 Read completed with error (sct=0, sc=8) 00:33:41.400 starting I/O failed 00:33:41.400 Read completed with error (sct=0, sc=8) 00:33:41.400 starting I/O failed 00:33:41.400 Read completed with error (sct=0, sc=8) 00:33:41.400 starting I/O failed 00:33:41.400 Write completed with error (sct=0, sc=8) 00:33:41.400 starting I/O failed 00:33:41.400 Read completed with error (sct=0, sc=8) 00:33:41.400 starting I/O failed 00:33:41.400 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 [2024-07-12 13:41:38.725771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with 
error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 [2024-07-12 13:41:38.726119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 
00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 [2024-07-12 13:41:38.726440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 
starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Write completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.401 starting I/O failed 00:33:41.401 Read completed with error (sct=0, sc=8) 00:33:41.402 starting I/O failed 00:33:41.402 Write completed with error (sct=0, sc=8) 00:33:41.402 starting I/O failed 00:33:41.402 Write completed with error (sct=0, sc=8) 00:33:41.402 starting I/O failed 00:33:41.402 Read completed with error (sct=0, sc=8) 00:33:41.402 starting I/O failed 00:33:41.402 [2024-07-12 13:41:38.726721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.402 [2024-07-12 13:41:38.726917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.402 [2024-07-12 13:41:38.726949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.402 qpair failed and we were unable to recover it. 00:33:41.402 [2024-07-12 13:41:38.727110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.402 [2024-07-12 13:41:38.727136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.402 qpair failed and we were unable to recover it. 00:33:41.402 [2024-07-12 13:41:38.727286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.402 [2024-07-12 13:41:38.727311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.402 qpair failed and we were unable to recover it. 00:33:41.402 [2024-07-12 13:41:38.727469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.402 [2024-07-12 13:41:38.727494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.402 qpair failed and we were unable to recover it. 00:33:41.402 [2024-07-12 13:41:38.727631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.402 [2024-07-12 13:41:38.727656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.402 qpair failed and we were unable to recover it. 
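The burst of "completed with error (sct=0, sc=8)" and "CQ transport error -6" entries above is the intended outcome of tc2: host/target_disconnect.sh hard-kills the nvmf_tgt (pid 3731819) while the reconnect workload still has commands outstanding, so those commands complete with error status and every qpair is reported as failed. The disconnect step itself is visible in the log; condensed below, with the pid written as the variable that the nvmfpid=3731819 assignment earlier in the log refers to:

  # host/target_disconnect.sh steps 45 and 47 as they appear in the xtrace above:
  kill -9 $nvmfpid   # 3731819 in this run: kill the target with I/O in flight
  sleep 2            # give the initiator time to observe the failed qpairs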
00:33:41.402 [2024-07-12 13:41:38.727805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.402 [2024-07-12 13:41:38.727831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420
00:33:41.402 qpair failed and we were unable to recover it.
00:33:41.403 [2024-07-12 13:41:38.740340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.403 [2024-07-12 13:41:38.740379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:41.403 qpair failed and we were unable to recover it.
00:33:41.405 [2024-07-12 13:41:38.749501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.405 [2024-07-12 13:41:38.749540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420
00:33:41.405 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously between 13:41:38.727 and 13:41:38.767 for tqpair=0xf05450, 0x7f3910000b90, and 0x7f3920000b90, always against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:41.407 [2024-07-12 13:41:38.767220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.767245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.767372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.767398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.767550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.767575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.767707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.767737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.767892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.767918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.768096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.768121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.768250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.768275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.768423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.768449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.768575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.768600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.768782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.768808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 
00:33:41.407 [2024-07-12 13:41:38.768965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.768990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.769147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.769173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.769351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.769377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.769535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.769560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.769688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.769713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.769867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.407 [2024-07-12 13:41:38.769892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.407 qpair failed and we were unable to recover it. 00:33:41.407 [2024-07-12 13:41:38.770046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.770072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.770255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.770280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.770433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.770471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.770631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.770658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 
00:33:41.408 [2024-07-12 13:41:38.770817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.770842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.770997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.771022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.771197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.771222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.771345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.771370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.771521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.771546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.771702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.771727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.771848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.771872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.772044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.772068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.772243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.772268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.772421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.772446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 
00:33:41.408 [2024-07-12 13:41:38.772632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.772662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.772813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.772838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.772964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.772990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.773146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.773173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.773362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.773387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.773507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.773532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.773690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.773715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.773924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.773984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.774160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.774185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.774363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.774389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 
00:33:41.408 [2024-07-12 13:41:38.774513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.774540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.774673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.774700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.774878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.774903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.775030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.775055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.775222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.775248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.775429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.775468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.775646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.775673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.775892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.775917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.776047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.776072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.776204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.776230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 
00:33:41.408 [2024-07-12 13:41:38.776385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.776412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.776548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.776575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.776728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.776753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.776936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.776961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.777093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.777119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.777296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.777329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.408 [2024-07-12 13:41:38.777478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.408 [2024-07-12 13:41:38.777517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.408 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.777683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.777733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.777892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.777919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.778106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.778132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 
00:33:41.409 [2024-07-12 13:41:38.778299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.778335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.778465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.778491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.778639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.778665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.778795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.778820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.779031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.779057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.779187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.779212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.779339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.779366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.779496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.779524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.779682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.779713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.779967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.780028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 
00:33:41.409 [2024-07-12 13:41:38.780301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.780365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.780491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.780517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.780676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.780702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.780932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.780982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.781249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.781276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.781437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.781463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.781631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.781656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.781804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.781829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.782006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.782032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.782160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.782185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 
00:33:41.409 [2024-07-12 13:41:38.782320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.782346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.782499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.782524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.782673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.782699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.782820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.782845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.783009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.783035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.783237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.783262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.783384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.783410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.783545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.783572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.783751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.783797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.409 [2024-07-12 13:41:38.784120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.784163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 
00:33:41.409 [2024-07-12 13:41:38.784452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.409 [2024-07-12 13:41:38.784478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.409 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.784601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.784627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.784764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.784789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.784940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.784981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.785221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.785263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.785464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.785492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.785650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.785692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.785963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.786034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.786258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.786302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.786481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.786508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 
00:33:41.410 [2024-07-12 13:41:38.786664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.786690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.786815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.786841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.787021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.787074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.787261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.787286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.787427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.787453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.787607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.787632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.787748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.787774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.788005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.788047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.788243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.788269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.788397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.788422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 
00:33:41.410 [2024-07-12 13:41:38.788546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.788571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.788767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.788809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.789004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.789046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.789294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.789326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.789476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.789502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.789679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.789704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.789835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.789861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.789994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.790021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.790262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.790287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.790451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.790477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 
00:33:41.410 [2024-07-12 13:41:38.790609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.790634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.790785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.790811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.790961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.790988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.791196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.791221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.791382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.791409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.791567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.791592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.791768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.791794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.792018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.792079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.792305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.792336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.410 [2024-07-12 13:41:38.792471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.792498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 
00:33:41.410 [2024-07-12 13:41:38.792681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.410 [2024-07-12 13:41:38.792706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.410 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.792863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.792889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.793016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.793064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.793313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.793343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.793501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.793526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.793706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.793732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.793863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.793888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.794013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.794045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.794212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.794238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 00:33:41.411 [2024-07-12 13:41:38.794393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.411 [2024-07-12 13:41:38.794420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.411 qpair failed and we were unable to recover it. 
00:33:41.416 [2024-07-12 13:41:38.853260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.853285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.853437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.853492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.853824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.853890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.854217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.854292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.854576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.854619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.854927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.854989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.855195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.855242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.855471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.855515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.855826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.855893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.856164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.856208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 
00:33:41.416 [2024-07-12 13:41:38.856536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.856600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.856903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.856966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.857226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.857268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.857597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.857672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.857948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.858013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.858276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.858312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.858478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.858514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.858779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.858808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.858962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.859008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.859254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.859299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 
00:33:41.416 [2024-07-12 13:41:38.859564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.859607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.859852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.859877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.860031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.860056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.860252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.860294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.860528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.860560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.416 [2024-07-12 13:41:38.860767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.416 [2024-07-12 13:41:38.860848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.416 qpair failed and we were unable to recover it. 00:33:41.417 [2024-07-12 13:41:38.861232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.417 [2024-07-12 13:41:38.861281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.417 qpair failed and we were unable to recover it. 00:33:41.417 [2024-07-12 13:41:38.861593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.417 [2024-07-12 13:41:38.861661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.417 qpair failed and we were unable to recover it. 00:33:41.417 [2024-07-12 13:41:38.861957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.417 [2024-07-12 13:41:38.861983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.417 qpair failed and we were unable to recover it. 00:33:41.417 [2024-07-12 13:41:38.862134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.417 [2024-07-12 13:41:38.862159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.417 qpair failed and we were unable to recover it. 
00:33:41.417 [2024-07-12 13:41:38.862409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.417 [2024-07-12 13:41:38.862453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.417 qpair failed and we were unable to recover it. 00:33:41.417 [2024-07-12 13:41:38.862773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.417 [2024-07-12 13:41:38.862841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.417 qpair failed and we were unable to recover it. 00:33:41.417 [2024-07-12 13:41:38.863137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.417 [2024-07-12 13:41:38.863219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.417 qpair failed and we were unable to recover it. 00:33:41.417 [2024-07-12 13:41:38.863508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.417 [2024-07-12 13:41:38.863554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.417 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.863768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.863794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.864003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.864072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.864284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.864310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.864536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.864602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.864842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.864869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.865021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.865048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 
00:33:41.689 [2024-07-12 13:41:38.865241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.865284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.865595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.865658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.866018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.866083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.866373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.866417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.866707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.866769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.867059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.867101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.867335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.867379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.867559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.867586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.867715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.867741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.867969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.867995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 
00:33:41.689 [2024-07-12 13:41:38.868126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.868151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.868343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.868399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.689 qpair failed and we were unable to recover it. 00:33:41.689 [2024-07-12 13:41:38.868753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.689 [2024-07-12 13:41:38.868827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.869117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.869181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.869421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.869465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.869773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.869845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.870172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.870215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.870522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.870606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.870944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.871006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.871232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.871258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 
00:33:41.690 [2024-07-12 13:41:38.871415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.871441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.871579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.871604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.871758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.871783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.871935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.871960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.872093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.872123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.872256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.872282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.872470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.872496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.872663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.872714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.872943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.872969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.873129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.873155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 
00:33:41.690 [2024-07-12 13:41:38.873339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.873383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.873659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.873734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.874048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.874115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.874353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.874379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.874512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.874538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.874663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.874688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.874838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.874863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.875006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.875032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.690 [2024-07-12 13:41:38.875165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.690 [2024-07-12 13:41:38.875191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.690 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.875328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.875356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 
00:33:41.691 [2024-07-12 13:41:38.875618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.875661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.876018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.876081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.876356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.876400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.876700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.876725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.876856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.876881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.877035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.877060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.877187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.877212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.877482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.877546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.877829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.877854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.877998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.878024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 
00:33:41.691 [2024-07-12 13:41:38.878279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.878333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.878635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.878660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.878837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.878863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.879011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.879037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.879190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.879215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.879376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.879402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.879556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.879583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.879735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.879779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.880049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.880091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.880380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.880453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 
00:33:41.691 [2024-07-12 13:41:38.880695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.880720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.880945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.881008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.881267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.881309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.881643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.881714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.882037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.882107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.691 qpair failed and we were unable to recover it. 00:33:41.691 [2024-07-12 13:41:38.882388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.691 [2024-07-12 13:41:38.882432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.882729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.882800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.883112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.883176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.883449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.883492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.883851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.883920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 
00:33:41.692 [2024-07-12 13:41:38.884219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.884263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.884571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.884597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.884774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.884799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.885119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.885185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.885429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.885473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.885797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.885862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.886185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.886246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.886511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.886554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.886835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.886862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.887011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.887036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 
00:33:41.692 [2024-07-12 13:41:38.887207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.887252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.887488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.887532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.887853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.887915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.888192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.888234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.888588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.888659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.889009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.889069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.889324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.889350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.889562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.889605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.889901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.889964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.890245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.890306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 
00:33:41.692 [2024-07-12 13:41:38.890557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.890617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.890957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.891033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.692 [2024-07-12 13:41:38.891276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.692 [2024-07-12 13:41:38.891330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.692 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.891599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.891661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.892006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.892073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.892285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.892310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.892465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.892509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.892827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.892891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.893176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.893202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.893335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.893361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 
00:33:41.693 [2024-07-12 13:41:38.893514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.893562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.893797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.893823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.893955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.893995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.894153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.894205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.894498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.894569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.894893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.894956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.895183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.895223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.895389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.895416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.895682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.895746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.896109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.896175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 
00:33:41.693 [2024-07-12 13:41:38.896513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.896580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.896940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.897002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.897241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.897283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.897607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.897677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.897984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.898046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.898329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.898372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.898583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.898625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.898874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.898937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.899249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.899292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.899521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.899563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 
00:33:41.693 [2024-07-12 13:41:38.899891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.899954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.900277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.900349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.900561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.900606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.900934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.901003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.901243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.901287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.901554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.901597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.693 [2024-07-12 13:41:38.901932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.693 [2024-07-12 13:41:38.901990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.693 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.902343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.902387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.902605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.902647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.902970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.903043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 
00:33:41.694 [2024-07-12 13:41:38.903274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13480 is same with the state(5) to be set 00:33:41.694 [2024-07-12 13:41:38.903807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.903916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.904195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.904222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.904378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.904406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.904543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.904569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.904877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.904942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.905312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.905395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.905622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.905698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.906097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.906162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.906497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.906541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 
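[editor's note] Unlike the surrounding connect() failures, the single nvme_tcp_qpair_set_recv_state message above reports that the qpair (tqpair=0xf13480) was asked to switch its receive state to the state it already holds, state(5). As a rough, hypothetical sketch of that pattern only (this is NOT SPDK source; the names recv_state, RECV_STATE_EXAMPLE and the value 5 are invented for illustration):

/* Hypothetical illustration of a "set state to the state it already has"
 * check that logs an error, mirroring the message seen in the log above. */
#include <stdio.h>

enum recv_state { RECV_STATE_EXAMPLE = 5 };   /* "state(5)" in the log */

struct tqpair { enum recv_state recv_state; };

static void set_recv_state(struct tqpair *q, enum recv_state next)
{
    if (q->recv_state == next) {
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)q, (int)next);
        return;                               /* nothing to change */
    }
    q->recv_state = next;
}

int main(void)
{
    struct tqpair q = { .recv_state = RECV_STATE_EXAMPLE };
    set_recv_state(&q, RECV_STATE_EXAMPLE);   /* triggers the error line */
    return 0;
}

[end editor's note]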
00:33:41.694 [2024-07-12 13:41:38.906893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.906957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.907274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.907300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.907437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.907463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.907731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.907767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.908002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.908027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.908312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.908390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.908671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.908735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.909092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.909156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.909506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.909549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.909875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.909939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 
00:33:41.694 [2024-07-12 13:41:38.910330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.910391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.910666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.910709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.911081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.911144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.911445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.911491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.911763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.911807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.912172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.912240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.912498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.912544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.912793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.912837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.913152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.913217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.913527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.913554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 
00:33:41.694 [2024-07-12 13:41:38.913729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.913755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.913936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.914001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.914298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.914385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.694 qpair failed and we were unable to recover it. 00:33:41.694 [2024-07-12 13:41:38.914627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.694 [2024-07-12 13:41:38.914703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.915088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.915152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.915460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.915503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.915720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.915745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.915880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.915905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.916060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.916085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.916264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.916290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 
00:33:41.695 [2024-07-12 13:41:38.916424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.916475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.916761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.916786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.917067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.917134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.917490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.917534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.917795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.917820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.917975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.918000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.918295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.918372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.918611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.918636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.918824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.918887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.919198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.919261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 
00:33:41.695 [2024-07-12 13:41:38.919533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.919559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.919701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.919725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.919918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.919943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.920279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.920377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.920652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.920695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.921118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.921181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.921515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.921560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.921907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.921974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.922340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.922407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.922728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.922795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 
00:33:41.695 [2024-07-12 13:41:38.923147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.923214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.923575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.923640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.924033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.924097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.924456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.924520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.924842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.924868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.925182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.925244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.695 [2024-07-12 13:41:38.925650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.695 [2024-07-12 13:41:38.925714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.695 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.926099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.926163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.926578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.926653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.927013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.927077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 
00:33:41.696 [2024-07-12 13:41:38.927422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.927490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.927890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.927956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.928363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.928429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.928720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.928745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.928925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.928950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.929210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.929277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.929657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.929721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.930074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.930141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.930518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.930591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.930916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.930941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 
00:33:41.696 [2024-07-12 13:41:38.931097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.931122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.931464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.931530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.931899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.931962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.932300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.932377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.932734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.932799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.933119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.933185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.933470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.933496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.933657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.933683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.933840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.933866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.934176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.934244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 
00:33:41.696 [2024-07-12 13:41:38.934613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.934681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.935073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.935136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.935487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.935551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.935911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.935973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.936275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.936301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.936449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.936522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.936825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.936850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.936997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.937043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.937391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.937456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.937775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.937840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 
00:33:41.696 [2024-07-12 13:41:38.938155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.938222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.938616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.938682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.939038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.939105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.939504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.939568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.939950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.940014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.940409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.940474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.696 qpair failed and we were unable to recover it. 00:33:41.696 [2024-07-12 13:41:38.940820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.696 [2024-07-12 13:41:38.940887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.941282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.941378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.941746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.941824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.942143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.942169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 
00:33:41.697 [2024-07-12 13:41:38.942441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.942508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.942865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.942930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.943245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.943271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.943590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.943657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.943974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.944000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.944159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.944215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.944559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.944624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.945006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.945069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.945389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.945415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.945637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.945701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 
00:33:41.697 [2024-07-12 13:41:38.946103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.946167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.946528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.946593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.946924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.946989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.947360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.947425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.947817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.947882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.948207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.948232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.948548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.948612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.948970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.949036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.949418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.949483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 00:33:41.697 [2024-07-12 13:41:38.949835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.697 [2024-07-12 13:41:38.949902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.697 qpair failed and we were unable to recover it. 
00:33:41.697 [2024-07-12 13:41:38.950256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.697 [2024-07-12 13:41:38.950336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:41.697 qpair failed and we were unable to recover it.
[The identical three-record failure pattern — posix.c:1038:posix_sock_create reporting connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f3910000b90 (addr=10.0.0.2, port=4420), followed by "qpair failed and we were unable to recover it." — repeats for every connection attempt from 13:41:38.950 through 13:41:39.023; the intervening duplicate records are omitted.]
00:33:41.703 [2024-07-12 13:41:39.023257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.703 [2024-07-12 13:41:39.023330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:41.703 qpair failed and we were unable to recover it.
00:33:41.703 [2024-07-12 13:41:39.023522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.023554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.023733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.023767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.024059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.024124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.024432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.024467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.024649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.024713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.025005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.025069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.025414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.025446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.025603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.025652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.025862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.025942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.026265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.026356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 
00:33:41.703 [2024-07-12 13:41:39.026525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.026558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.026815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.026849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.027093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.027154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.027395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.027428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.027594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.027626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.027823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.027872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.028113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.028176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.028450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.028482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.028649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.028681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.028864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.028896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 
00:33:41.703 [2024-07-12 13:41:39.029078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.029137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.029397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.029430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.029607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.029639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.029868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.029926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.030253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.030312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.030517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.030548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.030881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.030963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.031286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.031324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.031481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.031514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.031784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.031846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 
00:33:41.703 [2024-07-12 13:41:39.032161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.703 [2024-07-12 13:41:39.032221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.703 qpair failed and we were unable to recover it. 00:33:41.703 [2024-07-12 13:41:39.032461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.032495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.032651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.032683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.032967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.033026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.033365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.033398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.033600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.033632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.033959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.034026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.034311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.034351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.034503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.034535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.034732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.034765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 
00:33:41.704 [2024-07-12 13:41:39.035106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.035169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.035465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.035498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.035695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.035753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.036057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.036115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.036397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.036430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.036619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.036652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.036996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.037057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.037386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.037419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.037626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.037719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.038032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.038094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 
00:33:41.704 [2024-07-12 13:41:39.038431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.038465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.038642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.038675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.038833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.038866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.039032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.039096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.039386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.039418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.039597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.039629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.039820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.039898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.040185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.040245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.040477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.040510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.040668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.040700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 
00:33:41.704 [2024-07-12 13:41:39.040880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.040912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.041123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.041185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.041440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.041472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.041671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.041731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.042059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.042124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.042449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.042481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.042648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.042680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.042866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.042900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.043278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.043371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.043536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.043568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 
00:33:41.704 [2024-07-12 13:41:39.043889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.704 [2024-07-12 13:41:39.043949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.704 qpair failed and we were unable to recover it. 00:33:41.704 [2024-07-12 13:41:39.044326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.044386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.044555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.044588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.044745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.044778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.045022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.045086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.045414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.045447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.045628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.045662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.045950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.046010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.046334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.046394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.046547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.046580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 
00:33:41.705 [2024-07-12 13:41:39.046860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.046894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.047121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.047182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.047485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.047517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.047799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.047861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.048141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.048208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.048464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.048497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.048788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.048847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.049176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.049235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.049490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.049529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.049850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.049910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 
00:33:41.705 [2024-07-12 13:41:39.050232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.050290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.050526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.050558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.050814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.050872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.051186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.051269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.051505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.051539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.051726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.051758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.051985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.052045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.052399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.052432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.052658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.052717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.053001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.053033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 
00:33:41.705 [2024-07-12 13:41:39.053272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.053342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.053640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.053673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.053909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.053968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.054234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.054265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.054485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.054518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.054721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.054780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.055104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.055165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.055497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.055557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.055917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.055976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.056286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.056358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 
00:33:41.705 [2024-07-12 13:41:39.056728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.705 [2024-07-12 13:41:39.056788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.705 qpair failed and we were unable to recover it. 00:33:41.705 [2024-07-12 13:41:39.057079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.057111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.057263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.057347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.057649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.057709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.058024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.058081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.058384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.058453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.058748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.058808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.059163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.059222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.059581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.059641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.059922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.059984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 
00:33:41.706 [2024-07-12 13:41:39.060304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.060385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.060695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.060727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.060906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.060939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.061233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.061265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.061487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.061519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.061792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.061872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.062261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.062334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.062688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.062719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.062960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.063029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 00:33:41.706 [2024-07-12 13:41:39.063304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.706 [2024-07-12 13:41:39.063342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.706 qpair failed and we were unable to recover it. 
00:33:41.706 [2024-07-12 13:41:39.063596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.706 [2024-07-12 13:41:39.063655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:41.706 qpair failed and we were unable to recover it.
00:33:41.711 [2024-07-12 13:41:39.146207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.711 [2024-07-12 13:41:39.146271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:41.711 qpair failed and we were unable to recover it.
00:33:41.711 [2024-07-12 13:41:39.146653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.711 [2024-07-12 13:41:39.146719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.711 qpair failed and we were unable to recover it. 00:33:41.711 [2024-07-12 13:41:39.147073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.711 [2024-07-12 13:41:39.147141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.711 qpair failed and we were unable to recover it. 00:33:41.711 [2024-07-12 13:41:39.147548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.711 [2024-07-12 13:41:39.147613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.711 qpair failed and we were unable to recover it. 00:33:41.711 [2024-07-12 13:41:39.147959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.712 [2024-07-12 13:41:39.148023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.712 qpair failed and we were unable to recover it. 00:33:41.712 [2024-07-12 13:41:39.148381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.712 [2024-07-12 13:41:39.148448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.712 qpair failed and we were unable to recover it. 00:33:41.712 [2024-07-12 13:41:39.148766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.712 [2024-07-12 13:41:39.148829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.712 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-12 13:41:39.149172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-12 13:41:39.149236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-12 13:41:39.149565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-12 13:41:39.149632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-12 13:41:39.149990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-12 13:41:39.150055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-12 13:41:39.150410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.150477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-12 13:41:39.150798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.150862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.151159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.151222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.151613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.151679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.152020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.152087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.152446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.152512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.152901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.152975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.153378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.153443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.153799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.153863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.154220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.154283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.154725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.154793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-12 13:41:39.155148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.155215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.155588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.155656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.155991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.156056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.156444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.156510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.156898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.156962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.157346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.157411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.157765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.157831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.158217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.158281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.158698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.158763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.159168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.159231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-12 13:41:39.159595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.159664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.159976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.160043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.160440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.160506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.160848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.160913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.161229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.161296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.161651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.161715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.162068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.162133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.162491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.162557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.162950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.163013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.163397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.163463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-12 13:41:39.163813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.163878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.164266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.164345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.164750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.164815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.165204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.165268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.165592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.165659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.166000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.166064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.166405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.166470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.166859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.166922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-12 13:41:39.167233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-12 13:41:39.167297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.167662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.167726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 
00:33:41.984 [2024-07-12 13:41:39.168132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.168195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.168601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.168668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.169057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.169120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.169480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.169545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.169924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.169987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.170344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.170423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.170770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.170834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.171188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.171251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.171671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.171737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.172125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.172189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 
00:33:41.984 [2024-07-12 13:41:39.172533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.172600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.172945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.173012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.173357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.173424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.173813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.173877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.174241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.174307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.174706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.174770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.175151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.175215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.175594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.175660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.176007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.176075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.176404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.176480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 
00:33:41.984 [2024-07-12 13:41:39.176846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.176913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.177266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.177346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.177742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.177806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.178145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.178209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.178614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.178681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.179068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.179132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.179448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.179516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.179875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.179939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.180255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.180331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.180700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.180765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 
00:33:41.984 [2024-07-12 13:41:39.181077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.181141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.181538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.181603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.181930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.181998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.182328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.182394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.182737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.182804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.183167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.183231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.183678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.183746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.184111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.184175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.984 [2024-07-12 13:41:39.184478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.984 [2024-07-12 13:41:39.184545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.984 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.184940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.185005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 
00:33:41.985 [2024-07-12 13:41:39.185376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.185440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.185836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.185901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.186273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.186359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.186719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.186783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.187160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.187224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.187616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.187691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.188049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.188112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.188460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.188525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.188871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.188937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.189290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.189370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 
00:33:41.985 [2024-07-12 13:41:39.189730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.189793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.190153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.190216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.190580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.190648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.190971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.191038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.191436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.191502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.191895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.191959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.192362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.192430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.192827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.192891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.193240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.193307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.193753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.193820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 
00:33:41.985 [2024-07-12 13:41:39.194172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.194235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.194667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.194735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.195112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.195176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.195543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.195608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.195988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.196052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.196438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.196502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.196862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.196926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.197281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.197358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.197683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.197746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.198080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.198144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 
00:33:41.985 [2024-07-12 13:41:39.198462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.198530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.198864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.198928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.199295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.199391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.199737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.199801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.200148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.200212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.200575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.200643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.201048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.201113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.201504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.201591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.985 [2024-07-12 13:41:39.201976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.985 [2024-07-12 13:41:39.202041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.985 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-12 13:41:39.202396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-12 13:41:39.202463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 
00:33:41.986 [2024-07-12 13:41:39.202825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-12 13:41:39.202889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [... the same three-line failure repeats continuously from 13:41:39.202825 through 13:41:39.293238: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:41.991 [2024-07-12 13:41:39.293174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.991 [2024-07-12 13:41:39.293238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:41.991 qpair failed and we were unable to recover it.
00:33:41.991 [2024-07-12 13:41:39.293598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.293664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.293988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.294051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.294399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.294466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.294849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.294912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.295266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.295346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.295698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.295763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.296155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.296219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.296617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.296682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.297072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.297135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.297498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.297564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 
00:33:41.991 [2024-07-12 13:41:39.297871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.297940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.298344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.298409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.298774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.298838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.299191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.299258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.991 [2024-07-12 13:41:39.299591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.991 [2024-07-12 13:41:39.299658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.991 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.300048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.300113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.300489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.300552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.300903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.300968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.301258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.301342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.301732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.301797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 
00:33:41.992 [2024-07-12 13:41:39.302181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.302245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.302622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.302686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.303043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.303110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.303500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.303567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.303936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.304000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.304391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.304455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.304798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.304864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.305261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.305342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.305702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.305768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.306157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.306220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 
00:33:41.992 [2024-07-12 13:41:39.306610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.306675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.307025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.307089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.307454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.307519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.307879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.307943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.308298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.308376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.308732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.308798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.309162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.309226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.309584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.309648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.310042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.310106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.310499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.310565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 
00:33:41.992 [2024-07-12 13:41:39.310879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.310943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.311341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.311406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.311757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.311820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.312172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.312240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.312610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.312676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.313069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.313133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.313526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.313592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.313970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.314034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.314381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.314448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.314818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.314883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 
00:33:41.992 [2024-07-12 13:41:39.315242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.315306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.315705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.315768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.316164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.316227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.316549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.316614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.992 [2024-07-12 13:41:39.316937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.992 [2024-07-12 13:41:39.317003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.992 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.317389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.317454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.317807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.317873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.318226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.318291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.318666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.318730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.319122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.319185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 
00:33:41.993 [2024-07-12 13:41:39.319570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.319635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.319983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.320049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.320459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.320536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.320871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.320939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.321291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.321373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.321716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.321782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.322173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.322237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.322680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.322748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.323107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.323179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.323584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.323651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 
00:33:41.993 [2024-07-12 13:41:39.324004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.324067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.324455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.324521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.324910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.324975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.325295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.325377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.325688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.325755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.326156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.326219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.326646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.326712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.327060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.327126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.327518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.327583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.327941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.328004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 
00:33:41.993 [2024-07-12 13:41:39.328397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.328462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.328809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.328875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.329208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.329272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.329633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.329701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.330093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.330157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.330521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.330586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.330933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.330995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.331347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.331411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.331748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.331811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.332220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.332284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 
00:33:41.993 [2024-07-12 13:41:39.332653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.332717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.333083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.333147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.333493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.333560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.333949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.334013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.993 [2024-07-12 13:41:39.334399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.993 [2024-07-12 13:41:39.334465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.993 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.334849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.334912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.335259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.335340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.335654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.335723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.336072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.336140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.336530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.336596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 
00:33:41.994 [2024-07-12 13:41:39.336944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.337011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.337361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.337429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.337788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.337862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.338209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.338271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.338657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.338722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.339115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.339178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.339563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.339628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.339976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.340039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.340436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.340502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.340847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.340914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 
00:33:41.994 [2024-07-12 13:41:39.341327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.341393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.341761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.341824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.342135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.342200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.342630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.342698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.343094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.343158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.343528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.343594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.344008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.344072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.344417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.344483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.344844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.344909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.345251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.345331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 
00:33:41.994 [2024-07-12 13:41:39.345733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.345798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.346197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.346261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.346624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.346689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.347070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.347134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.347452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.347518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.347909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.347973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.348331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.348399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.348775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.348840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.349225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.349291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.349668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.349732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 
00:33:41.994 [2024-07-12 13:41:39.350052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.350116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.350506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.350572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.350957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.351021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.351403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.351469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.351810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.994 [2024-07-12 13:41:39.351875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.994 qpair failed and we were unable to recover it. 00:33:41.994 [2024-07-12 13:41:39.352217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.352283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.352657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.352724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.353115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.353178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.353530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.353598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.353984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.354050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 
00:33:41.995 [2024-07-12 13:41:39.354357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.354424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.354763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.354828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.355190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.355265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.355660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.355728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.356086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.356152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.356505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.356572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.356933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.356997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.357299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.357379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.357768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.357832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.358177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.358240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 
00:33:41.995 [2024-07-12 13:41:39.358571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.358638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.359005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.359070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.359388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.359454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.359759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.359827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.360237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.360301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.360714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.360779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.361144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.361207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.361590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.361657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.362050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.362115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.362472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.362540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 
00:33:41.995 [2024-07-12 13:41:39.362937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.363001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.363361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.363427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.363770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.363834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.364217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.364281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.364694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.364760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.365158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.365221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.365580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.365644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.365990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.366056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.366429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.366495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.366890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.366955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 
00:33:41.995 [2024-07-12 13:41:39.367348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.367413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.367777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.995 [2024-07-12 13:41:39.367841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.995 qpair failed and we were unable to recover it. 00:33:41.995 [2024-07-12 13:41:39.368240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.368303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.368682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.368747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.369103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.369170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.369518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.369583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.369967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.370032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.370380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.370446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.370803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.370865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.371168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.371234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 
00:33:41.996 [2024-07-12 13:41:39.371609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.371677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.372043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.372107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.372501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.372576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.372923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.372988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.373346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.373410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.373793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.373856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.374168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.374232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.374558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.374625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.375013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.375077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.375415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.375480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 
00:33:41.996 [2024-07-12 13:41:39.375828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.375892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.376273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.376350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.376727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.376790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.377139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.377203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.377611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.377678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.378035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.378101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.378504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.378570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.378948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.379014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.379364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.379431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.379790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.379856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 
00:33:41.996 [2024-07-12 13:41:39.380160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.380227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.380614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.380680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.380985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.381052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.381438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.381503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.381849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.381913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.382296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.382374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.382735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.382801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.383185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.383250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.383606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.383674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.996 qpair failed and we were unable to recover it. 00:33:41.996 [2024-07-12 13:41:39.384082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.996 [2024-07-12 13:41:39.384147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 
00:33:41.997 [2024-07-12 13:41:39.384494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.384559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.384948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.385012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.385410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.385474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.385804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.385868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.386230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.386297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.386698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.386762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.387151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.387214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.387608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.387672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.388057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.388121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.388520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.388585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 
00:33:41.997 [2024-07-12 13:41:39.388908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.388972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.389346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.389410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.389773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.389846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.390194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.390257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.390615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.390680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.391075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.391138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.391523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.391588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.391968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.392032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.392379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.392444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.392829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.392892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 
00:33:41.997 [2024-07-12 13:41:39.393286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.393363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.393763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.393829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.394210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.394274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.394667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.394731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.395117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.395181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.395518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.395584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.395993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.396058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.396401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.396470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.396828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.396893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.397278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.397357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 
00:33:41.997 [2024-07-12 13:41:39.397711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.397777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.398150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.398215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.398623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.398690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.399086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.399150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.399518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.399584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.399935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.400002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.400363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.400428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.400780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.400843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.401207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.401274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 00:33:41.997 [2024-07-12 13:41:39.401626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.997 [2024-07-12 13:41:39.401690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.997 qpair failed and we were unable to recover it. 
00:33:41.998 [2024-07-12 13:41:39.402074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.402137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.402473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.402540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.402883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.402946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.403294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.403388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.403787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.403852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.404173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.404236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.404571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.404635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.405016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.405079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.405470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.405535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.405886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.405953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 
00:33:41.998 [2024-07-12 13:41:39.406342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.406407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.406829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.406895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.407241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.407348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.407756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.407821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.408184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.408247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.408614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.408678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.409041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.409109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.409463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.409550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.409882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.409946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.410343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.410408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 
00:33:41.998 [2024-07-12 13:41:39.410764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.410830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.411218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.411282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.411729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.411795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.412125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.412191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.412581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.412647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.413001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.413065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.413437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.413503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.413853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.413916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.414261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.414340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.414704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.414768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 
00:33:41.998 [2024-07-12 13:41:39.415127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.415191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.415582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.415646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.415996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.416062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.416385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.416450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.416760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.416827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.417211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.417274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.417708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.417774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.418168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.418231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.418613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.418678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 00:33:41.998 [2024-07-12 13:41:39.419050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.998 [2024-07-12 13:41:39.419113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.998 qpair failed and we were unable to recover it. 
00:33:41.999 [2024-07-12 13:41:39.419465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.419530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.419883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.419948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.420348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.420413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.420812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.420877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.421234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.421301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.421642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.421705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.422059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.422126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.422529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.422595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.422970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.423037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 00:33:41.999 [2024-07-12 13:41:39.423439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.999 [2024-07-12 13:41:39.423504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:41.999 qpair failed and we were unable to recover it. 
00:33:41.999 [2024-07-12 13:41:39.423864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.999 [2024-07-12 13:41:39.423928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:41.999 qpair failed and we were unable to recover it.
00:33:41.999 - 00:33:42.274 [2024-07-12 13:41:39.424292 through 13:41:39.513805] the same error sequence repeats continuously: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."
00:33:42.274 [2024-07-12 13:41:39.514236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-12 13:41:39.514307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-12 13:41:39.514724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-12 13:41:39.514790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-12 13:41:39.515138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-12 13:41:39.515201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-12 13:41:39.515554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-12 13:41:39.515619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-12 13:41:39.515972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-12 13:41:39.516036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-12 13:41:39.516427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-12 13:41:39.516491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-12 13:41:39.516825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-12 13:41:39.516890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.517247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.517330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.517722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.517785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.518113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.518177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 
00:33:42.275 [2024-07-12 13:41:39.518562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.518628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.519013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.519077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.519438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.519504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.519856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.519924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.520248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.520312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.520711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.520774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.521093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.521160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.521557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.521623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.521970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.522034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.522367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.522433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 
00:33:42.275 [2024-07-12 13:41:39.522754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.522818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.523162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.523226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.523585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.523650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.524046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.524110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.524473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.524540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.524906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.524970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.525337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.525403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.525758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.525821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.526169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.526235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.526668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.526736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 
00:33:42.275 [2024-07-12 13:41:39.527133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.527197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.527594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.527661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.528011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.528087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.528483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.528548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.528926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.528989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.529332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.529397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.529795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.529860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.530186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.530250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.530589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.530654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-12 13:41:39.531012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.531076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 
00:33:42.275 [2024-07-12 13:41:39.531414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-12 13:41:39.531479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.531829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.531893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.532244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.532308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.532676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.532740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.533092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.533155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.533480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.533545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.533903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.533967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.534330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.534396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.534733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.534795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.535143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.535207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-12 13:41:39.535582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.535652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.536042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.536107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.536472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.536542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.536869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.536934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.537292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.537371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.537689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.537753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.538056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.538119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.538492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.538557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.538914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.538979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.539341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.539421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-12 13:41:39.539731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.539798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.540155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.540223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.540636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.540702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.541074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.541138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.541524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.541590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.541983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.542046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.542402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.542467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.542835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.542898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.543239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.543303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.543704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.543771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-12 13:41:39.544122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.544187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.544547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.544614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.544927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.544993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.545328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.545396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.545743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.545807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.546151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.546214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.546565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.546630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.547021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.547084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.547408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.547473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.547851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.547915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-12 13:41:39.548271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-12 13:41:39.548359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-12 13:41:39.548677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.548744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.549137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.549202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.549567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.549632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.549983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.550047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.550405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.550474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.550791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.550859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.551241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.551306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.551716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.551780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.552168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.552231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-12 13:41:39.552601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.552666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.552981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.553045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.553401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.553469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.553856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.553921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.554274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.554349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.554713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.554777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.555132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.555196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.555517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.555582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.555902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.555966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.556351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.556427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-12 13:41:39.556751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.556818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.557173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.557237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.557611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.557676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.558022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.558086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.558473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.558539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.558922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.558986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.559295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.559378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.559739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.559804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.560161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.560227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.560631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.560697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-12 13:41:39.561040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.561104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.561466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.561532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.561886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.561952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.562328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.562397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.562759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.562823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.563174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.563238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.563621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.563687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.564005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.564068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.564417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.564485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.564834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.564900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-12 13:41:39.565203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-12 13:41:39.565269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-12 13:41:39.565641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.565706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-12 13:41:39.566048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.566113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-12 13:41:39.566502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.566567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-12 13:41:39.566948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.567013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-12 13:41:39.567400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.567464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-12 13:41:39.567811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.567875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-12 13:41:39.568225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.568288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-12 13:41:39.568663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.568726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-12 13:41:39.569112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-12 13:41:39.569176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 
00:33:42.278 [2024-07-12 13:41:39.569502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.278 [2024-07-12 13:41:39.569568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:42.278 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every retry logged from [2024-07-12 13:41:39.569877] through [2024-07-12 13:41:39.660157]; only the timestamps differ between repetitions ...]
00:33:42.283 [2024-07-12 13:41:39.660525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-12 13:41:39.660591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-12 13:41:39.660977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-12 13:41:39.661041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-12 13:41:39.661428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-12 13:41:39.661494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-12 13:41:39.661884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-12 13:41:39.661948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-12 13:41:39.662304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.662384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.662808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.662875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.663262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.663353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.663744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.663807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.664156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.664219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.664582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.664648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-12 13:41:39.664994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.665058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.665425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.665491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.665809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.665872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.666194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.666262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.666623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.666689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.667081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.667144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.667493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.667558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.667913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.667977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.668377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.668441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.668818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.668882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-12 13:41:39.669276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.669353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.669708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.669771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.670153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.670217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.670570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.670634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.671017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.671080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.671485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.671551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.671965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.672030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.672397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.672463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.672812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.672876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.673182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.673246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-12 13:41:39.673645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.673710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.674096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.674160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.674521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.674586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.674968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.675033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.675427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.675493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.675848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.675914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-12 13:41:39.676309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-12 13:41:39.676388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.676704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.676772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.677156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.677220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.677620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.677695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-12 13:41:39.678088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.678151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.678536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.678602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.679005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.679069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.679466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.679532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.679888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.679952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.680361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.680426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.680811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.680874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.681253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.681331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.681733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.681798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.682192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.682256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-12 13:41:39.682624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.682688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.683077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.683140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.683497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.683562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.683930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.683994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.684338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.684405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.684756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.684819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.685138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.685202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.685546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.685610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.685991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.686055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.686416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.686481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-12 13:41:39.686855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.686919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.687277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.687368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.687694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.687758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.688098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.688165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.688485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.688553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.688898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.688963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.689283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.689371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.689732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.689800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.690198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.690262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-12 13:41:39.690637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-12 13:41:39.690702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 
00:33:42.286 [2024-07-12 13:41:39.691032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.691096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.691485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.691550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.691868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.691934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.692342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.692407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.692764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.692829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.693211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.693274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.693636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.693700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.694065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.694129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.694448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.694515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.694884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.694961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 
00:33:42.286 [2024-07-12 13:41:39.695211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.695244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.695446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.695480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.695673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.695707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.695896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.695930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.696200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.696268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.696551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.696585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.696842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.696907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.697257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.697335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.697552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.697585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.697903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.697969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 
00:33:42.286 [2024-07-12 13:41:39.698374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.698409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.698621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.698687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.699043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.699110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.699383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.699417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.699619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.699682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.700077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.700139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.700395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.700430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.700718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.700782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.701170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.701232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.701502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.701536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 
00:33:42.286 [2024-07-12 13:41:39.701725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.701761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.702007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.702070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.702357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.702391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.702648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.702711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.703049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.703112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.703353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.703387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.703578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.703651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-12 13:41:39.703997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-12 13:41:39.704063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.704384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.704419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.704698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.704761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 
00:33:42.287 [2024-07-12 13:41:39.705083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.705145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.705374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.705410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.705613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.705646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.705837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.705870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.706272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.706352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.706634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.706697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.707092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.707154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.707407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.707440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.707674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.707738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.708080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.708162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 
00:33:42.287 [2024-07-12 13:41:39.708487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.708552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.708932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.708996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.709386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.709451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.709833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.709897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.710210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.710278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.710720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.710786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.711159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.711223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.711597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.711662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.712020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.712086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.712452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.712519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 
00:33:42.287 [2024-07-12 13:41:39.712908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.712973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.713345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.713411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.713733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.713796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.714196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.714260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.714640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.714705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.715099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.715162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.715545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.715609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.715971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.716034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.716391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.716459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 00:33:42.287 [2024-07-12 13:41:39.716809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.287 [2024-07-12 13:41:39.716873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.287 qpair failed and we were unable to recover it. 
00:33:42.563 [2024-07-12 13:41:39.801813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.563 [2024-07-12 13:41:39.801879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.563 qpair failed and we were unable to recover it. 00:33:42.563 [2024-07-12 13:41:39.802240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.563 [2024-07-12 13:41:39.802303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.563 qpair failed and we were unable to recover it. 00:33:42.563 [2024-07-12 13:41:39.802644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.563 [2024-07-12 13:41:39.802709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.563 qpair failed and we were unable to recover it. 00:33:42.563 [2024-07-12 13:41:39.803016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.563 [2024-07-12 13:41:39.803083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.563 qpair failed and we were unable to recover it. 00:33:42.563 [2024-07-12 13:41:39.803414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.563 [2024-07-12 13:41:39.803480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.563 qpair failed and we were unable to recover it. 00:33:42.563 [2024-07-12 13:41:39.803871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.563 [2024-07-12 13:41:39.803933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.563 qpair failed and we were unable to recover it. 00:33:42.563 [2024-07-12 13:41:39.804274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.804360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.804684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.804748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.805097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.805160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.805553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.805619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 
00:33:42.564 [2024-07-12 13:41:39.805972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.806035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.806403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.806472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.806829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.806893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.807233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.807296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.807734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.807800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.808168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.808232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.808600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.808669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.809052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.809115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.809465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.809533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.809911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.809975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 
00:33:42.564 [2024-07-12 13:41:39.810344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.810408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.810730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.810794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.811150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.811213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.811619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.811686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.812042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.812106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.812485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.812551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.812908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.812973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.813363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.813428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.813754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.813817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.814175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.814239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 
00:33:42.564 [2024-07-12 13:41:39.814610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.814678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.815007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.815075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.815469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.815535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.815934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.815999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.816353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.816418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.816757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.816824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.817217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.817281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.817669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.817733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.818118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.818182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.818535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.818602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 
00:33:42.564 [2024-07-12 13:41:39.818944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.819008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.819362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.819427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.819794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.819857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.820240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.820304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.820660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-12 13:41:39.820725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.564 qpair failed and we were unable to recover it. 00:33:42.564 [2024-07-12 13:41:39.821111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.821175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.821509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.821585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.821941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.822008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.822354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.822421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.822812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.822877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 
00:33:42.565 [2024-07-12 13:41:39.823262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.823339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.823697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.823761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.824127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.824190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.824544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.824609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.824959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.825027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.825416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.825505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.825854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.825918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.826299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.826379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.826726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.826789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.827142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.827205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 
00:33:42.565 [2024-07-12 13:41:39.827609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.827674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.828021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.828087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.828399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.828464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.828851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.828915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.829303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.829381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.829741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.829805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.830167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.830231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.830632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.830696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.831046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.831109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.831467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.831535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 
00:33:42.565 [2024-07-12 13:41:39.831921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.831984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.832371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.832436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.832789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.832853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.833178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.833244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.833616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.833684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.834047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.834111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.834497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.834562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.834920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.834983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.565 [2024-07-12 13:41:39.835368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.565 [2024-07-12 13:41:39.835434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.565 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.835790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.835854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 
00:33:42.566 [2024-07-12 13:41:39.836171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.836237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.836615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.836683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.837045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.837108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.837441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.837509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.837903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.837968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.838329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.838394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.838781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.838856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.839237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.839300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.839713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.839779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.840131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.840198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 
00:33:42.566 [2024-07-12 13:41:39.840540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.840604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.840987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.841051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.841439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.841504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.841894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.841958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.842256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.842340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.842694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.842761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.843157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.843221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.843543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.843612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.843981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.844047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.844412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.844477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 
00:33:42.566 [2024-07-12 13:41:39.844844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.844911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.845232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.845296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.845749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.845815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.846171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.846238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.846597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.846665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.847023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.847086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.847439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.847504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.847887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.847951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.848296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.848374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.848719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.848785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 
00:33:42.566 [2024-07-12 13:41:39.849179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.849242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.849627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.849692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.850039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.850105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.566 [2024-07-12 13:41:39.850480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.566 [2024-07-12 13:41:39.850549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.566 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.850944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.851008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.851394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.851459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.851844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.851908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.852260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.852348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.852746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.852810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.853193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.853256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 
00:33:42.567 [2024-07-12 13:41:39.853661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.853727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.854064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.854128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.854482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.854549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.854910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.854975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.855312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.855392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.855714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.855780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.856171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.856246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.856624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.856689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.857002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.857066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 00:33:42.567 [2024-07-12 13:41:39.857387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.567 [2024-07-12 13:41:39.857455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.567 qpair failed and we were unable to recover it. 
00:33:42.567 [2024-07-12 13:41:39.857802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.567 [2024-07-12 13:41:39.857868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:42.567 qpair failed and we were unable to recover it.
00:33:42.567 [... the same error triplet -- posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 13:41:39.858221 through 13:41:39.952170 (timestamper 00:33:42.567-00:33:42.573), with no attempt succeeding ...]
00:33:42.573 [2024-07-12 13:41:39.952529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.952594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.952978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.953041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.953396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.953461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.953814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.953880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.954232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.954295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.954723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.954787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.955101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.955163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.955549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.955614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.955970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.956033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.956380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.956445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 
00:33:42.573 [2024-07-12 13:41:39.956834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.956897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.957287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.957365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.957665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.957728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.958116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.958180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.958531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.958597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.958993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.959057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.959400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.959465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.959852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.959915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.960284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.960362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.960715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.960778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 
00:33:42.573 [2024-07-12 13:41:39.961163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.573 [2024-07-12 13:41:39.961227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.573 qpair failed and we were unable to recover it. 00:33:42.573 [2024-07-12 13:41:39.961588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.961657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.961973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.962040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.962432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.962497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.962880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.962945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.963292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.963371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.963694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.963768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.964154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.964218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.964582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.964647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.965031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.965095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 
00:33:42.574 [2024-07-12 13:41:39.965476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.965541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.965938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.966002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.966358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.966424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.966734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.966798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.967143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.967207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.967566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.967633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.968017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.968080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.968476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.968542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.968889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.968953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.969342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.969407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 
00:33:42.574 [2024-07-12 13:41:39.969822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.969887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.970281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.970369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.970690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.970756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.971073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.971137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.971520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.971586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.971931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.971994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.972343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.972407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.972740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.972803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.973155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.973222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.973575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.973643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 
00:33:42.574 [2024-07-12 13:41:39.974002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.974066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.974412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.974477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.974864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.974928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.975340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.975416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.975795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.975858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.976209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.976276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.976627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.976691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.977080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.977145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.977526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.977591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.977914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.977978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 
00:33:42.574 [2024-07-12 13:41:39.978369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.978435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.574 qpair failed and we were unable to recover it. 00:33:42.574 [2024-07-12 13:41:39.978777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.574 [2024-07-12 13:41:39.978840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.979195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.979261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.979581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.979647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.980028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.980090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.980478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.980544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.980896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.980959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.981333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.981400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.981758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.981821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.982188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.982252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 
00:33:42.575 [2024-07-12 13:41:39.982620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.982684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.983003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.983066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.983460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.983525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.983840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.983904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.984289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.984366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.984709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.984773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.985152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.985216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.985546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.985612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.986012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.986076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.986426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.986492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 
00:33:42.575 [2024-07-12 13:41:39.986863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.986928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.987274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.987356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.987676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.987743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.988106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.988171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.988494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.988559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.988870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.988938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.989348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.989417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.989772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.989834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.990190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.990254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.990603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.990668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 
00:33:42.575 [2024-07-12 13:41:39.990995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.991059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.991420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.991486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.991831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.991895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.992254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.992342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.992656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.992723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.993083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.993149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.993515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.993582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.993927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.993995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.994346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.994413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.994810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.994875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 
00:33:42.575 [2024-07-12 13:41:39.995215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.995278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-12 13:41:39.995649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-12 13:41:39.995716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:39.996034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.996098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:39.996466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.996532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:39.996886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.996954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:39.997306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.997385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:39.997739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.997803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:39.998200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.998265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:39.998657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.998761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:39.999183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.999253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 
00:33:42.576 [2024-07-12 13:41:39.999628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:39.999695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.000021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.000084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.000416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.000491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.000877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.000946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.001329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.001393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.001765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.001827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.002151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.002214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.002561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.002627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.002976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.003042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.003464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.003528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 
00:33:42.576 [2024-07-12 13:41:40.003978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.004074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.004456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.004529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.004896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.004967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.005334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.005402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.005742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.005806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.006168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.006233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.006582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.006651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.007001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.007065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.007446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.007514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.007872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.007938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 
00:33:42.576 [2024-07-12 13:41:40.008302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.008393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.008729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.008795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.009129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.009195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.009555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.009635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.010020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.010086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.010444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.010510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.010828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.010895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.011272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.011304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.011484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.011517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-12 13:41:40.011739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-12 13:41:40.011824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 
00:33:42.576 [2024-07-12 13:41:40.012118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.012184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.012414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.012447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.012632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.012665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.012828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.012861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.013019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.013053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.013338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.013398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.013569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.013602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.013867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.013933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.014279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.014373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.014547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.014579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 
00:33:42.577 [2024-07-12 13:41:40.014794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.014826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.014989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.015022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.015209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.015241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.015407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.015440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.015623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.015688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.016015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.016080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.016419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.016462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.016659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.016715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.016931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.016972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.017177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.017223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 
00:33:42.577 [2024-07-12 13:41:40.017420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.017462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.017661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.017723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.018021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.018062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.018331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.018373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.018786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.018831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.019126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.019209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.019467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.019513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.019750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.019816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.020134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.020172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 00:33:42.577 [2024-07-12 13:41:40.020406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.577 [2024-07-12 13:41:40.020450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.577 qpair failed and we were unable to recover it. 
00:33:42.853 [2024-07-12 13:41:40.020736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.020783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.021007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.021053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.021273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.021327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.021801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.021896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.022987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.023031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.023218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.023268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.023444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.023481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.023683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.023718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.024057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.024090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.024432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.024471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 
00:33:42.853 [2024-07-12 13:41:40.024707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.024777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.025206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.025278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.025467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.025505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.025780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.025852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.026168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.026204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.026419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.026446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.027454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.027485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.027630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.027656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.027787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.027830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.028023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.028052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 
00:33:42.853 [2024-07-12 13:41:40.028219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.028247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.028413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.028440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.028626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.028680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.028949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.029003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.029167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.029211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.029372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.029399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.029583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.029627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.029830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.029883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.030048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.030092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.030601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.030639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 
00:33:42.853 [2024-07-12 13:41:40.030816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.030842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.031007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.031032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.031161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.853 [2024-07-12 13:41:40.031185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.853 qpair failed and we were unable to recover it. 00:33:42.853 [2024-07-12 13:41:40.031313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.031347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.031484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.031513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.031667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.031692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.031849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.031883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.032018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.032047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.032177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.032203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.032401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.032428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 
00:33:42.854 [2024-07-12 13:41:40.032555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.032580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.032754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.032780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.032914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.032939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.033094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.033119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.033270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.033302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.033479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.033505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.033643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.033672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.033711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13480 (9): Bad file descriptor 00:33:42.854 [2024-07-12 13:41:40.033941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.034004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.034249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.034294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 
00:33:42.854 [2024-07-12 13:41:40.034519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.034556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.034804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.034910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.035174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.035227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.035476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.035522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.036669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.036701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.036947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.036979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.037746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.037777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.038073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.038125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.038313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.038370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.038511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.038536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 
00:33:42.854 [2024-07-12 13:41:40.038677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.038704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.038860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.854 [2024-07-12 13:41:40.038885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.854 qpair failed and we were unable to recover it. 00:33:42.854 [2024-07-12 13:41:40.039037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.039062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.039221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.039247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.039413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.039440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.039577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.039606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.039749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.039774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.039922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.039947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.040078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.040114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.040274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.040299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 
00:33:42.855 [2024-07-12 13:41:40.040460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.040486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.040633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.040659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.040798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.040823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.040973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.040998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.041130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.041156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.041283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.041308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.041447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.041472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.041607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.041644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.041788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.041813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.041968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.041993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 
00:33:42.855 [2024-07-12 13:41:40.042171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.042196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.042340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.042367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.042522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.042548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.042726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.042752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.042879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.042904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.043060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.043090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.043219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.043244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.043401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.043427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.043558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.043584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.043753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.043777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 
00:33:42.855 [2024-07-12 13:41:40.043937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.043961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.044127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.044164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.044348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.044378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.044519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.044547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.044712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.044739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.044895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.855 [2024-07-12 13:41:40.044921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.855 qpair failed and we were unable to recover it. 00:33:42.855 [2024-07-12 13:41:40.045072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.045099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.045257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.045283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.045464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.045490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.045646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.045671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 
00:33:42.856 [2024-07-12 13:41:40.045833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.045858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.045986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.046027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.046184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.046211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.046401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.046428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.046553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.046578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.046760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.046785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.046940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.046975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.047126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.047149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.047325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.047352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.047497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.047522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 
00:33:42.856 [2024-07-12 13:41:40.047653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.047691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.047851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.047877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.048002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.048027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.048146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.048172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.048349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.048377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.048496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.048521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.048674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.048700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.048861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.048885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.049057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.049082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.049214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.049240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 
00:33:42.856 [2024-07-12 13:41:40.049993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.050038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.050224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.050249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.050410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.050435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.050564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.050589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.050765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.050790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.050940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.050965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.051156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.051202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.856 [2024-07-12 13:41:40.051393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.856 [2024-07-12 13:41:40.051421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.856 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.051559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.051585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.051757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.051783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 
00:33:42.857 [2024-07-12 13:41:40.051942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.051968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.052113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.052139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.052270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.052296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.052446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.052472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.052645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.052672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.052876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.052906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.053088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.053118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.053329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.053375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.053505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.053531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.054398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.054432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 
00:33:42.857 [2024-07-12 13:41:40.054575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.054616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.054833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.054876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.055089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.055134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.055340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.055386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.055522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.055548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.055722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.055749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.055950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.055980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.056156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.056182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.056358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.056385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.056541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.056568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 
00:33:42.857 [2024-07-12 13:41:40.056758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.056802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.057905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.057949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.058168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.058195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.058347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.058375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.058530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.058555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.058692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.857 [2024-07-12 13:41:40.058719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.857 qpair failed and we were unable to recover it. 00:33:42.857 [2024-07-12 13:41:40.058894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.058920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.059171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.059202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.059381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.059408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.059563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.059588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 
00:33:42.858 [2024-07-12 13:41:40.059751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.059777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.059902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.059928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.060088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.060115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.060277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.060309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.060465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.060490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.060652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.060693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.060903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.060934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.061148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.061179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.061411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.061438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.061570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.061597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 
00:33:42.858 [2024-07-12 13:41:40.061778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.061824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.062041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.062072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.062239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.062268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.062435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.062463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.062623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.062650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.062832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.062861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.063024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.063055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.063289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.063328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.063474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.063500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.063686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.063716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 
00:33:42.858 [2024-07-12 13:41:40.064005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.064059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.064325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.858 [2024-07-12 13:41:40.064356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.858 qpair failed and we were unable to recover it. 00:33:42.858 [2024-07-12 13:41:40.064524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.064550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.064703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.064730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.064997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.065051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.065224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.065254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.065501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.065528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.065698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.065724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.065850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.065877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.066068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.066098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 
00:33:42.859 [2024-07-12 13:41:40.066252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.066282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.066449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.066476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.066637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.066663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.066826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.066852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.067007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.067032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.067153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.067185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.067372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.067398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.067529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.067554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.067722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.067748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.067870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.067896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 
00:33:42.859 [2024-07-12 13:41:40.068036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.068066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.068241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.068271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.068440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.068465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.068633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.068664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.068849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.068875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.069086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.069117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.069266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.069303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.069487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.069513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.069674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.069699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.069887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.069929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 
00:33:42.859 [2024-07-12 13:41:40.070192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.070223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.859 [2024-07-12 13:41:40.070393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.859 [2024-07-12 13:41:40.070419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.859 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.070570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.070611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.070801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.070828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.071068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.071120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.071328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.071372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.071495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.071521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.071678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.071703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.071886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.071934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.072291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.072370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 
00:33:42.860 [2024-07-12 13:41:40.072510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.072535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.072700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.072725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.072879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.072905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.073056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.073083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.073247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.073278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.073540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.073580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.073771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.073798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.073955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.074002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.074210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.074254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.074438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.074465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 
00:33:42.860 [2024-07-12 13:41:40.074640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.074665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.074865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.074915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.075210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.075265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.075458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.075484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.075683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.075708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.075948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.075999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.076194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.076238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.076402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.076429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.076620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.076669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.860 [2024-07-12 13:41:40.076862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.076924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 
00:33:42.860 [2024-07-12 13:41:40.077136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.860 [2024-07-12 13:41:40.077183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.860 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.077366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.077397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.077636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.077661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.077867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.077912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.078194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.078251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.078434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.078479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.078656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.078699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.078906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.078958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.079132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.079157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.079332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.079357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 
00:33:42.861 [2024-07-12 13:41:40.079533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.079558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.079710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.079751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.079995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.080049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.080210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.080235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.080398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.080445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.080606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.080649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.080832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.080880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.081168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.081218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.081397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.081442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.081654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.081696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 
00:33:42.861 [2024-07-12 13:41:40.081949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.082005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.082146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.082173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.082402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.082433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.082626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.082669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.082899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.082945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.083105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.083131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.083287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.083313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.083494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.083536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.083792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.083847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 00:33:42.861 [2024-07-12 13:41:40.084136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.861 [2024-07-12 13:41:40.084183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.861 qpair failed and we were unable to recover it. 
00:33:42.862 [2024-07-12 13:41:40.084392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.084437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.084595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.084641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.084849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.084900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.085121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.085166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.085362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.085394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.085587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.085630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.085899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.085950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.086113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.086139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.086296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.086327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.086503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.086547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 
00:33:42.862 [2024-07-12 13:41:40.086829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.086897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.087113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.087164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.087424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.087469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.087669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.087715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.087909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.087954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.088112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.088138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.088272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.088297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.088457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.088486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.088641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.088682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.088920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.088971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 
00:33:42.862 [2024-07-12 13:41:40.089126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.089152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.089271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.089297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.089514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.089559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.089713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.089757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.089921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.089950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.090101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.090127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.090285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.090331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.090497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.090523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.090687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.090714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.090862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.090888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 
00:33:42.862 [2024-07-12 13:41:40.091183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.091237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.091467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.091518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.091705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.091768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.862 qpair failed and we were unable to recover it. 00:33:42.862 [2024-07-12 13:41:40.091994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.862 [2024-07-12 13:41:40.092046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.092183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.092208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.092417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.092474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.092713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.092766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.093056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.093109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.093288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.093320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.093458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.093486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 
00:33:42.863 [2024-07-12 13:41:40.093759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.093803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.094040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.094091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.094244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.094269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.094424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.094452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.094711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.094763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.095068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.095131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.095287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.095331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.095497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.095523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.095750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.095803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.096093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.096144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 
00:33:42.863 [2024-07-12 13:41:40.096323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.096349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.096487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.096513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.096802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.096856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.097115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.097162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.097356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.097383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.097562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.097589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.097885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.097945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.098203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.098252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.098420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.098450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.098705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.098757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 
00:33:42.863 [2024-07-12 13:41:40.098958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.099005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.099162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.099187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.099347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.863 [2024-07-12 13:41:40.099374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.863 qpair failed and we were unable to recover it. 00:33:42.863 [2024-07-12 13:41:40.099632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.099691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.099933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.099980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.100158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.100184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.100388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.100443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.100687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.100743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.101014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.101062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.101251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.101277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 
00:33:42.864 [2024-07-12 13:41:40.101448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.101475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.101698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.101755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.101934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.101988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.102149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.102175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.102349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.102394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.102638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.102689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.102910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.102947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.103103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.103130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.103313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.103344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.103537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.103589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 
00:33:42.864 [2024-07-12 13:41:40.103887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.103941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.104230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.104284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.104454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.104481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.104705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.104768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.104946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.104974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.105165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.105196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.105352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.105380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.105608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.105660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.105919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.105968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.864 qpair failed and we were unable to recover it. 00:33:42.864 [2024-07-12 13:41:40.106121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.864 [2024-07-12 13:41:40.106147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 
00:33:42.865 [2024-07-12 13:41:40.106320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.106346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.106490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.106516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.106789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.106838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.107082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.107132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.107321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.107348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.107513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.107539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.107809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.107880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.108166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.108219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.108387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.108414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.108658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.108710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 
00:33:42.865 [2024-07-12 13:41:40.108933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.108978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.109165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.109209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.109379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.109405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.109657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.109710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.109949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.109998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.110189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.110215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.110375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.110401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.110635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.110685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.110933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.110983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.111163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.111188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 
00:33:42.865 [2024-07-12 13:41:40.111335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.111361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.111575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.111627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.111885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.111937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.112117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.112143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.112327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.112354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.112512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.112539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.112828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.112888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.113177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.113238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.865 [2024-07-12 13:41:40.113404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.865 [2024-07-12 13:41:40.113430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.865 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.113688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.113740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 
00:33:42.866 [2024-07-12 13:41:40.114029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.114079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.114235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.114260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.114416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.114442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.114730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.114789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.115078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.115130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.115296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.115327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.115518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.115547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.115839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.115899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.116132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.116177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.116422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.116448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 
00:33:42.866 [2024-07-12 13:41:40.116653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.116707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.116954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.117004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.117163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.117189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.117350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.117377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.117664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.117715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.118012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.118069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.118219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.118245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.118405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.118432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.118694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.118742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.118944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.118989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 
00:33:42.866 [2024-07-12 13:41:40.119150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.119178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.119437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.119497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.119725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.119777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.120055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.120110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.120249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.120274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.120534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.120584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.120884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.120937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.121151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.866 [2024-07-12 13:41:40.121206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.866 qpair failed and we were unable to recover it. 00:33:42.866 [2024-07-12 13:41:40.121393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.121447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.121678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.121725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 
00:33:42.867 [2024-07-12 13:41:40.122019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.122074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.122237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.122263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.122435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.122462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.122725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.122776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.123065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.123115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.123278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.123304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.123448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.123476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.123769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.123819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.124022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.124067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.124249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.124275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 
00:33:42.867 [2024-07-12 13:41:40.124438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.124464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.124690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.124723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.124950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.124994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.125148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.125174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.125401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.125429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.125718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.125776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.126061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.126111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.126295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.126327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.126494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.126519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.126779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.126829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 
00:33:42.867 [2024-07-12 13:41:40.127136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.127186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.127377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.127403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.127591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.127634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.127907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.127969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.128308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.128366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.128528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.128554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.128833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.128890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.129169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.129222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.129389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.129415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.129683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.129736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 
00:33:42.867 [2024-07-12 13:41:40.130075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.130119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.130277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.130302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.130501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.130527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.130692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.867 [2024-07-12 13:41:40.130738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.867 qpair failed and we were unable to recover it. 00:33:42.867 [2024-07-12 13:41:40.131026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.131083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.131242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.131268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.131430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.131457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.131653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.131701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.131880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.131926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.132089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.132134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 
00:33:42.868 [2024-07-12 13:41:40.132292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.132324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.132455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.132483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.132762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.132816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.133104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.133155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.133320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.133351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.133512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.133537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.133834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.133894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.134235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.134280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.134416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.134442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.134720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.134774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 
00:33:42.868 [2024-07-12 13:41:40.135064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.135114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.135294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.135326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.135508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.135534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.135768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.135803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.136081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.136133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.136266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.136292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.136504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.136546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.136737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.136765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.136927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.136953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 00:33:42.868 [2024-07-12 13:41:40.137218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.868 [2024-07-12 13:41:40.137283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.868 qpair failed and we were unable to recover it. 
00:33:42.875 [2024-07-12 13:41:40.212543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.212608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.213010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.213073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.213430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.213497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.213888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.213952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.214339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.214404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.214710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.214774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.215165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.215229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.215603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.215667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.215996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.216060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 00:33:42.875 [2024-07-12 13:41:40.216413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.875 [2024-07-12 13:41:40.216478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.875 qpair failed and we were unable to recover it. 
00:33:42.875 [2024-07-12 13:41:40.216817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.216882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.217181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.217258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.217602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.217667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.218055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.218118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.218490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.218557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.218908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.218972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.219373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.219438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.219759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.219826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.220184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.220249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.220673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.220739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 
00:33:42.876 [2024-07-12 13:41:40.221050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.221118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.221511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.221576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.221953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.222016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.222359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.222425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.222804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.222866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.223196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.223260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.223601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.223666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.224054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.224117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.224434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.224502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.224879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.224943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 
00:33:42.876 [2024-07-12 13:41:40.225295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.225373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.225727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.225792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.226151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.226214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.226569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.226633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.226990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.227054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.227377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.227443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.227815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.227879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.228231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.228294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.228736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.228825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.229222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.229289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 
00:33:42.876 [2024-07-12 13:41:40.229648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.229716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.876 [2024-07-12 13:41:40.230113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.876 [2024-07-12 13:41:40.230178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.876 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.230537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.230603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.230930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.230997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.231310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.231389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.231729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.231794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.232116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.232180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.232571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.232637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.232988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.233052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.233424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.233489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 
00:33:42.877 [2024-07-12 13:41:40.233855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.233922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.234327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.234405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.234759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.234823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.235154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.235218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.235585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.235651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.236007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.236072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.236410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.236475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.236785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.236852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.237255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.237335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.237660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.237723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 
00:33:42.877 [2024-07-12 13:41:40.238040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.238106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.238503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.238571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.238897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.238964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.239293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.239376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.239711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.239775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.240127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.240192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.240588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.240654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.240972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.241039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.241351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.241439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 00:33:42.877 [2024-07-12 13:41:40.241821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.877 [2024-07-12 13:41:40.241886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.877 qpair failed and we were unable to recover it. 
00:33:42.878 [2024-07-12 13:41:40.242285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.242365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.242676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.242745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.243065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.243129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.243503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.243568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.243925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.243991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.244300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.244379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.244737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.244800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.245158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.245221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.245753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.245847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.246246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.246333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 
00:33:42.878 [2024-07-12 13:41:40.246737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.246802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.247175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.247245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.247614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.247679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.248007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.248070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.248406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.248477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.248849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.248916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.249271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.249355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.249685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.249752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.250108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.250173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.250492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.250557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 
00:33:42.878 [2024-07-12 13:41:40.250887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.250953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.251343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.251406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.251755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.251819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.252148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.252211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.252620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.252685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.253077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.253141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.253499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.253563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.253941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.254004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.254357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.254422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.254772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.254835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 
00:33:42.878 [2024-07-12 13:41:40.255216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.255280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.255645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.255709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.256104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.256170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.256563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.256636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.256955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.257019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.257365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.257444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.257840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.257905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.258249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.258312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.258720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.878 [2024-07-12 13:41:40.258784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.878 qpair failed and we were unable to recover it. 00:33:42.878 [2024-07-12 13:41:40.259132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.259196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 
00:33:42.879 [2024-07-12 13:41:40.259545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.259609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.259988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.260052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.260443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.260507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.260885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.260948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.261295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.261391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.261762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.261826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.262149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.262212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.262563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.262626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.262989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.263055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.263437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.263504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 
00:33:42.879 [2024-07-12 13:41:40.263857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.263919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.264269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.264356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.264715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.264780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.265135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.265199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.265572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.265644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.265958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.266019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.266385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.266455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.266822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.266886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.267259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.267339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 00:33:42.879 [2024-07-12 13:41:40.267684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.879 [2024-07-12 13:41:40.267747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:42.879 qpair failed and we were unable to recover it. 
00:33:42.879 [2024-07-12 13:41:40.268105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.879 [2024-07-12 13:41:40.268169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420
00:33:42.879 qpair failed and we were unable to recover it.
00:33:42.879 [... the same three-line sequence (connect() failed with errno = 111, sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420, qpair failed and unable to recover) repeats continuously, identical apart from timestamps, from 13:41:40.268 through 13:41:40.359 (elapsed 00:33:42.879 to 00:33:43.156) ...]
00:33:43.156 [2024-07-12 13:41:40.359281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.156 [2024-07-12 13:41:40.359364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420
00:33:43.156 qpair failed and we were unable to recover it.
00:33:43.156 [2024-07-12 13:41:40.359747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.359810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.360125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.360187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.360557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.360620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.360978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.361041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.361385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.361452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.361872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.361938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.362346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.362413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.362802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.362866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.363179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.363244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.363601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.363666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 
00:33:43.156 [2024-07-12 13:41:40.364012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.364092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.364439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.364504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.364864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.364927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.365226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.365289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.365645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.365712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.366098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.366161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.366526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.366589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.366937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.366999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.367349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.367419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.367746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.367809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 
00:33:43.156 [2024-07-12 13:41:40.368187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.368249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.368627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.368691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.369049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.369113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.369515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.369580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.369981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.370045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.370399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.370461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.370816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.370884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.371192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.371255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.371616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.371679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.372034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.372097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 
00:33:43.156 [2024-07-12 13:41:40.372441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.372506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.156 [2024-07-12 13:41:40.372861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.156 [2024-07-12 13:41:40.372923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.156 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.373262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.373342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.373730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.373794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.374174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.374237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.374601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.374664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.375063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.375130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.375497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.375571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.375925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.375988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.376345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.376408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 
00:33:43.157 [2024-07-12 13:41:40.376775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.376839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.377200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.377263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.377718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.377835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.378290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.378412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.378789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.378857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.379200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.379265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.379642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.379739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.380085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.380156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.380520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.380588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.380970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.381037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 
00:33:43.157 [2024-07-12 13:41:40.381434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.381501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.381895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.381961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.382308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.382392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.382755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.382819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.383179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.383243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.383625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.383690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.384046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.384110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.384459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.384524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.384884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.384947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.385285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.385370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 
00:33:43.157 [2024-07-12 13:41:40.385736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.157 [2024-07-12 13:41:40.385800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.157 qpair failed and we were unable to recover it. 00:33:43.157 [2024-07-12 13:41:40.386189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.386253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.386574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.386641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.387001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.387069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.387429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.387506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.387846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.387911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.388292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.388370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.388731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.388798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.389156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.389223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.389607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.389672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 
00:33:43.158 [2024-07-12 13:41:40.390029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.390095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.390460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.390525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.390917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.390981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.391344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.391410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.391734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.391798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.392186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.392249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.392585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.392653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.393013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.393080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.393457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.393526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.393909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.393973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 
00:33:43.158 [2024-07-12 13:41:40.394346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.394411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.394760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.394823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.395165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.395232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.395629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.395694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.396084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.396147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.396505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.396570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.396961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.397024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.397350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.397415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.397772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.397836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.398232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.398295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 
00:33:43.158 [2024-07-12 13:41:40.398625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.398692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.399070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.399134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.399538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.399603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.399984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.400048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.400437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.400501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.400821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.400884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.401242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.401306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.401717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.401781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.402104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.402165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 00:33:43.158 [2024-07-12 13:41:40.402556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.158 [2024-07-12 13:41:40.402621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.158 qpair failed and we were unable to recover it. 
00:33:43.159 [2024-07-12 13:41:40.403014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.403078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.403429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.403494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.403877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.403940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.404300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.404379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.404693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.404771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.405133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.405198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.405543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.405608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.405973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.406040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.406395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.406461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.406810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.406875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 
00:33:43.159 [2024-07-12 13:41:40.407264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.407340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.407696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.407760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.408141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.408204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.408556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.408621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.408924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.408990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.409382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.409448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.409833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.409896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.410243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.410306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.410701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.410765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.411162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.411225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 
00:33:43.159 [2024-07-12 13:41:40.411591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.411657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.411976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.412041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.412349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.412416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.412771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.412839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.413189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.413253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.413672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.413737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.414086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.414153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.414488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.414554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.414909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.414974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.415337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.415403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 
00:33:43.159 [2024-07-12 13:41:40.415761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.415824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.416227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.416291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.416703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.416770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.417165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.417229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.417622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.417687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.159 qpair failed and we were unable to recover it. 00:33:43.159 [2024-07-12 13:41:40.418072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.159 [2024-07-12 13:41:40.418136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.418517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.418581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.418963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.419027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.419422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.419487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.419805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.419871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 
00:33:43.160 [2024-07-12 13:41:40.420240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.420305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.420665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.420732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.421080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.421144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.421503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.421568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.421881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.421957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.422358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.422424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.422786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.422851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.423243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.423306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.423705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.423769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.424136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.424199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 
00:33:43.160 [2024-07-12 13:41:40.424555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.424622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.424987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.425052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.425400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.425468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.425788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.425854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.426240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.426302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.426647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.426712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.427078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.427141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.427498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.427565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.427970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.428034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.428416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.428480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 
00:33:43.160 [2024-07-12 13:41:40.428865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.428929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.429281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.429366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.429767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.429833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.430197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.430262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.430674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.430740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.431095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.431159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.431552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.431618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.432002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.432065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.432408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.432476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.432866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.432929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 
00:33:43.160 [2024-07-12 13:41:40.433285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.433382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.433763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.433828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.434216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.434281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.434695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.434760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.160 [2024-07-12 13:41:40.435126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.160 [2024-07-12 13:41:40.435190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.160 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.435545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.435611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.435999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.436062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.436448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.436513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.436897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.436961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.437312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.437392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 
00:33:43.161 [2024-07-12 13:41:40.437746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.437810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.438190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.438253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.438622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.438687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.439065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.439128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.439526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.439601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.439983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.440047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.440365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.440433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.440823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.440887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.441268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.441353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.441710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.441777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 
00:33:43.161 [2024-07-12 13:41:40.442124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.442191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.442557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.442623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.442969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.443032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.443386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.443451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.443832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.443895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.444273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.444349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.444748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.444813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.445133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.445196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.445611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.445677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.446044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.446107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 
00:33:43.161 [2024-07-12 13:41:40.446476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.446542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.446939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.447002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.447309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.447394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.447775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.447840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.448172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.448238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.448615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.448681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.449045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.449108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.449478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.161 [2024-07-12 13:41:40.449568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.161 qpair failed and we were unable to recover it. 00:33:43.161 [2024-07-12 13:41:40.449917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.449981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.450333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.450398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 
00:33:43.162 [2024-07-12 13:41:40.450786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.450849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.451222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.451287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.451700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.451765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.452112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.452176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.452519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.452584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.452957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.453021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.453379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.453444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.453789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.453853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.454237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.454301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.454715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.454780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 
00:33:43.162 [2024-07-12 13:41:40.455132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.455198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.455528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.455593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.455988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.456052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.456445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.456510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.456902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.456975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.457379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.457444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.457828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.457892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.458282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.458357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.458680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.458744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.459129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.459193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 
00:33:43.162 [2024-07-12 13:41:40.459578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.459642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.459984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.460050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.460417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.162 [2024-07-12 13:41:40.460486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.162 qpair failed and we were unable to recover it. 00:33:43.162 [2024-07-12 13:41:40.460834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.460898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.461284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.461376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.461727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.461791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.462169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.462232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.462548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.462613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.462974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.463038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.463388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.463453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 
00:33:43.163 [2024-07-12 13:41:40.463804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.463868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.464223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.464290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.464659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.464723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.465028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.465095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.465491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.465557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.465870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.465933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.466299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.466376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.466723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.466786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.467132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.467197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.467539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.467622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 
00:33:43.163 [2024-07-12 13:41:40.468027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.468091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.468497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.468563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.468951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.469016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.469424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.469489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.469846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.469910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.470287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.470364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.470762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.470827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.471184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.471249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.471615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.471679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.472039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.472103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 
00:33:43.163 [2024-07-12 13:41:40.472453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.472521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.472865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.472929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.473226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.473293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.473693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.473758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.474057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.474130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.474485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.474552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.474938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.475001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.475326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.163 [2024-07-12 13:41:40.475395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.163 qpair failed and we were unable to recover it. 00:33:43.163 [2024-07-12 13:41:40.475711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.475778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.476095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.476160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 
00:33:43.164 [2024-07-12 13:41:40.476522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.476587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.476932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.476996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.477352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.477417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.477809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.477875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.478224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.478288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.478674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.478738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.479050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.479113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.479471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.479538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.479942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.480006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.480363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.480427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 
00:33:43.164 [2024-07-12 13:41:40.480816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.480880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.481242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.481305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.481646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.481709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.482060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.482126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.482492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.482557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.482931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.482995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.483351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.483419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.483785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.483849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.484195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.484259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.484622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.484689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 
00:33:43.164 [2024-07-12 13:41:40.485020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.485085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.485446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.485512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.485894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.485958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.486263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.486341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.486653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.486716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.487112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.487175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.487546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.487611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.487952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.488015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.488409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.488473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.488817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.488885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 
00:33:43.164 [2024-07-12 13:41:40.489254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.489334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.489734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.489800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.490212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.490276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.490679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.490743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.491089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.491166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.491511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.491575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.491952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.492015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.492387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.164 [2024-07-12 13:41:40.492453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.164 qpair failed and we were unable to recover it. 00:33:43.164 [2024-07-12 13:41:40.492801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.492864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.493244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.493308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 
00:33:43.165 [2024-07-12 13:41:40.493680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.493747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.494149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.494213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.494579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.494644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.494991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.495055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.495429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.495494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.495849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.495913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.496336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.496400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.496818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.496885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.497251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.497330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.497707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.497770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 
00:33:43.165 [2024-07-12 13:41:40.498097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.498161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.498524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.498590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.498981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.499045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.499396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.499461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.499811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.499879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.500234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.500298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.500643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.500708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.501105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.501169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.501565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.501630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.501990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.502056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 
00:33:43.165 [2024-07-12 13:41:40.502421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.502486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.502844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.502908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.503330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.503395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.503695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.503758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.504111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.504175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.504471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.504536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.504853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.504919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.505263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.505347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.505705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.505769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.506148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.506211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 
00:33:43.165 [2024-07-12 13:41:40.506572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.506637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.507025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.507089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.507444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.507508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.507843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.507909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.508298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.508385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.508778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.508841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.509240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.509304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.509704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.165 [2024-07-12 13:41:40.509769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.165 qpair failed and we were unable to recover it. 00:33:43.165 [2024-07-12 13:41:40.510109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.510173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.510568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.510632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 
00:33:43.166 [2024-07-12 13:41:40.511014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.511077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.511458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.511522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.511871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.511936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.512234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.512297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.512652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.512719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.513065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.513131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.513497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.513562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.513925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.513989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.514302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.514392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.514751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.514815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 
00:33:43.166 [2024-07-12 13:41:40.515200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.515264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.515593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.515660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.516048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.516112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.516477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.516543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.516903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.516967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.517333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.517397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.517728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.517791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.518135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.518200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.518559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.518627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.518977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.519045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 
00:33:43.166 [2024-07-12 13:41:40.519420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.519487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.519816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.519883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.520243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.520308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.520677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.520740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.521067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.521131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.521475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.521540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.521913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.521976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.522350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.522416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.522765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.522833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.523176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.523239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 
00:33:43.166 [2024-07-12 13:41:40.523622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.523687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.524038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.524101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.524420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.524487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.524848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.524912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.525281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.525368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.525757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.525820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.526144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.526208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.166 [2024-07-12 13:41:40.526534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.166 [2024-07-12 13:41:40.526599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.166 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.526985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.527048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.527373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.527439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 
00:33:43.167 [2024-07-12 13:41:40.527754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.527816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.528138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.528205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.528573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.528639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.528970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.529034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.529401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.529468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.529813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.529877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.530230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.530293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.530707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.530772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.531178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.531242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.531674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.531742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 
00:33:43.167 [2024-07-12 13:41:40.532105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.532168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.532523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.532588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.532971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.533034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.533429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.533493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.533813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.533877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.534193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.534260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.534675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.534741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.535039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.535105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.535459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.535527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.535922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.535986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 
00:33:43.167 [2024-07-12 13:41:40.536307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.536398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.536765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.536830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.537189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.537253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.537633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.537698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.538002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.538066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.538455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.538520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.538880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.538944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.539290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.539367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.539707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.539771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.167 qpair failed and we were unable to recover it. 00:33:43.167 [2024-07-12 13:41:40.540127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.167 [2024-07-12 13:41:40.540195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 
00:33:43.168 [2024-07-12 13:41:40.540585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.540650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.540993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.541057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.541440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.541509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.541826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.541895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.542248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.542329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.542730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.542795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.543149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.543211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.543609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.543675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.544027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.544091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.544450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.544515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 
00:33:43.168 [2024-07-12 13:41:40.544899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.544962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.545299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.545377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.545702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.545766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.546110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.546174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.546495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.546564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.546915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.546979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.547364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.547437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.547783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.547847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.548209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.548276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.548697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.548763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 
00:33:43.168 [2024-07-12 13:41:40.549164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.549227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.549610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.549678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.550079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.550143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.550505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.550571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.550927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.550991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.551361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.551426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.551774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.551837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.552232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.552296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.552660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.552724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.553034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.553101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 
00:33:43.168 [2024-07-12 13:41:40.553494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.553579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.553933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.554008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.554364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.554429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.554782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.554847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.555160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.555226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.168 [2024-07-12 13:41:40.555596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.168 [2024-07-12 13:41:40.555661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.168 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.556047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.556111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.556431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.556499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.556888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.556952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.557343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.557408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 
00:33:43.169 [2024-07-12 13:41:40.557765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.557829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.558182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.558246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.558582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.558651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.559043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.559106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.559421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.559491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.559854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.559921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.560282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.560361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.560725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.560792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.561147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.561211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 00:33:43.169 [2024-07-12 13:41:40.561549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.561615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it. 
00:33:43.169 [2024-07-12 13:41:40.561967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.169 [2024-07-12 13:41:40.562031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.169 qpair failed and we were unable to recover it.
00:33:43.169 - 00:33:43.448 [... the same three-message sequence (posix.c:1038:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it.") repeats continuously for the same tqpair from 13:41:40.561967 through 13:41:40.653978; repetitions collapsed here ...]
00:33:43.448 [2024-07-12 13:41:40.654291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.448 [2024-07-12 13:41:40.654369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.448 qpair failed and we were unable to recover it. 00:33:43.448 [2024-07-12 13:41:40.654713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.448 [2024-07-12 13:41:40.654778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.448 qpair failed and we were unable to recover it. 00:33:43.448 [2024-07-12 13:41:40.655177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.448 [2024-07-12 13:41:40.655240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.448 qpair failed and we were unable to recover it. 00:33:43.448 [2024-07-12 13:41:40.655599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.448 [2024-07-12 13:41:40.655666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.448 qpair failed and we were unable to recover it. 00:33:43.448 [2024-07-12 13:41:40.656020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.448 [2024-07-12 13:41:40.656087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.448 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.656495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.656560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.656872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.656939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.657361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.657447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.657772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.657840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.658244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.658308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 
00:33:43.449 [2024-07-12 13:41:40.658674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.658739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.659066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.659139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.659485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.659550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.659891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.659956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.660307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.660384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.660718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.660781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.661125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.661190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.661551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.661616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.661966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.662032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.662353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.662420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 
00:33:43.449 [2024-07-12 13:41:40.662807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.662871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.663225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.663288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.663654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.663719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.664073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.664136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.664522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.664587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.664985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.665048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.665405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.665470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.665780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.665844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.666234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.666297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.666718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.666783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 
00:33:43.449 [2024-07-12 13:41:40.667160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.667224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.667587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.667653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.667997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.668062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.668459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.668525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.668854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.668917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.669313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.669404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.669728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.669795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.670188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.670251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.670673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.670739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.671100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.671163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 
00:33:43.449 [2024-07-12 13:41:40.671507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.671574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.671901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.671968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.672364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.672429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.672783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.672848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.449 [2024-07-12 13:41:40.673161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.449 [2024-07-12 13:41:40.673223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.449 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.673580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.673645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.674004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.674067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.674423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.674491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.674805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.674868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.675250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.675328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 
00:33:43.450 [2024-07-12 13:41:40.675712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.675776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.676165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.676241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.676637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.676701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.677006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.677073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.677422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.677487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.677835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.677898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.678250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.678333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.678693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.678757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.679111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.679176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.679514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.679578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 
00:33:43.450 [2024-07-12 13:41:40.679967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.680031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.680389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.680455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.680811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.680877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.681234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.681299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.681713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.681779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.682189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.682252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.682617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.682682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.683071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.683135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.683539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.683604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.683961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.684025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 
00:33:43.450 [2024-07-12 13:41:40.684352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.684419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.684807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.684871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.685224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.685291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.685684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.685748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.686100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.686166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.686542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.686608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.686960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.687026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.687410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.687476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.450 [2024-07-12 13:41:40.687834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.450 [2024-07-12 13:41:40.687901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.450 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.688256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.688332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 
00:33:43.451 [2024-07-12 13:41:40.688704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.688768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.689120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.689183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.689586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.689650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.690010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.690074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.690398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.690462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.690851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.690915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.691332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.691397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.691745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.691808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.692167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.692231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.692633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.692697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 
00:33:43.451 [2024-07-12 13:41:40.693049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.693115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.693479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.693562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.693934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.693998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.694386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.694450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.694835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.694899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.695292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.695371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.695701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.695764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.696159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.696223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.696547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.696613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.697003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.697067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 
00:33:43.451 [2024-07-12 13:41:40.697421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.697486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.697842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.697904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.698237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.698300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.698734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.698801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.699128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.699191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.699603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.699670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.700056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.700121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.700480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.700551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.700948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.701011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.701396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.701461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 
00:33:43.451 [2024-07-12 13:41:40.701821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.701883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.702200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.702263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.702605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.702669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.702992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.703055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.703406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.703474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.703860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.703924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.451 [2024-07-12 13:41:40.704284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.451 [2024-07-12 13:41:40.704359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.451 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.704725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.704789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.705154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.705216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.705594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.705658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 
00:33:43.452 [2024-07-12 13:41:40.705981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.706047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.706447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.706512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.706858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.706922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.707306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.707398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.707754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.707818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.708174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.708238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.708618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.708683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.709036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.709102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.709492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.709558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.709909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.709972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 
00:33:43.452 [2024-07-12 13:41:40.710342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.710406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.710894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.711006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.711449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.711520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.711842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.711907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.712304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.712385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.712745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.712809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.713196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.713259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.713588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.713653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.714015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.714079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.714437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.714501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 
00:33:43.452 [2024-07-12 13:41:40.714873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.714941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.715343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.715409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.715765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.715831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.716191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.716254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.716635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.716700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.717061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.717125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.717486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.717554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.717873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.717940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3731819 Killed "${NVMF_APP[@]}" "$@" 00:33:43.452 [2024-07-12 13:41:40.718290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.718369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 
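The "Killed" message above is bash reporting that PID 3731819 — the process running "${NVMF_APP[@]}" "$@", reported at line 36 of test/nvmf/host/target_disconnect.sh, i.e. the target application the test had started — has been killed. That is consistent with the wall of refused connections around it: with the target process down, nothing listens on 10.0.0.2:4420, so every reconnect attempt from the host comes back with ECONNREFUSED and is logged as another unrecoverable qpair. Purely as an illustration of the pattern at work here (not SPDK's actual reconnect path), a bounded retry around connect() looks like this:

    /* Illustrative sketch only, not SPDK code: keep retrying a TCP connect()
     * with a short pause while the peer refuses, which is the situation in
     * the log while the target application is down. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
    {
        for (int attempt = 1; attempt <= max_tries; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
            inet_pton(AF_INET, ip, &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                      /* listener is back; hand the socket to the caller */

            fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                    attempt, errno, strerror(errno));
            close(fd);
            usleep(200 * 1000);                 /* brief pause before the next attempt */
        }
        return -1;                              /* give up after max_tries */
    }

    int main(void)
    {
        int fd = connect_with_retry("10.0.0.2", 4420, 10);  /* address/port taken from the log */
        if (fd >= 0)
            close(fd);
        return fd >= 0 ? 0 : 1;
    }

Each failed attempt corresponds to one posix_sock_create/nvme_tcp_qpair_connect_sock pair in the log; the loop only succeeds once a listener is bound to the port again.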
00:33:43.452 [2024-07-12 13:41:40.718753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.718818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:43.452 [2024-07-12 13:41:40.719217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:43.452 [2024-07-12 13:41:40.719282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:43.452 [2024-07-12 13:41:40.719736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.719803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:43.452 [2024-07-12 13:41:40.720166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.452 [2024-07-12 13:41:40.720231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.720641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.720706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-12 13:41:40.721096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.452 [2024-07-12 13:41:40.721160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.721526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.721592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.721925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.721993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 
00:33:43.453 [2024-07-12 13:41:40.722370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.722437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.722752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.722819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.723207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.723272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.723648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.723717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.724020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.724055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.724258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.724296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.724544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.724579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.724971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.725034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.725394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.725432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 
00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3732370 00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3732370 00:33:43.453 [2024-07-12 13:41:40.725684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.725749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3732370 ']' 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.453 [2024-07-12 13:41:40.726142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:43.453 [2024-07-12 13:41:40.726207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 13:41:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.453 [2024-07-12 13:41:40.726485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.726520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.726837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.726901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.727287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.727376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 
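The trace above restarts the target: a fresh nvmf_tgt (nvmfpid=3732370) is launched inside the cvl_0_0_ns_spdk network namespace with shared-memory id 0, tracepoint mask 0xFFFF and core mask 0xF0, and waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock. A simplified stand-in for that sequence, reusing the command line from the log; the polling loop and the scripts/rpc.py path are assumptions, not the real waitforlisten helper from autotest_common.sh:

# Start a new nvmf target in the test's network namespace, as the traced command does.
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# Simplified, hypothetical stand-in for waitforlisten: poll until the RPC socket exists
# and the application answers a basic RPC.
for _ in $(seq 1 100); do
    if [ -S /var/tmp/spdk.sock ] && \
       /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
           -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.1
done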
00:33:43.453 [2024-07-12 13:41:40.727588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.727656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.728021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.728086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.728416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.728451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.728676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.728740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.729091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.729155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.729467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.729505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.729815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.729881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.730255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.730334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.730574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.730608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.730917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.730953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 
00:33:43.453 [2024-07-12 13:41:40.731160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.731226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.731536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.731572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.731928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.731992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.732351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.732405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.732607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.732643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.732819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.732854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.733035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.733070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.733412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.733447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.733620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.733655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-12 13:41:40.733976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.453 [2024-07-12 13:41:40.734039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.453 qpair failed and we were unable to recover it. 
00:33:43.454 [2024-07-12 13:41:40.734408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.734449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.734633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.734669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.734844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.734879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.735048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.735084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.735418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.735455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.735702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.735765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.736151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.736215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.736506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.736541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.736872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.736936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.737285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.737375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 
00:33:43.454 [2024-07-12 13:41:40.737585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.737651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.738007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.738075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.738430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.738466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.738647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.738682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.739009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.739073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.739413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.739450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.739710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.739777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.740103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.740167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.740496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.740532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.740827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.740890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 
00:33:43.454 [2024-07-12 13:41:40.741251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.741367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.741565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.741601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.741830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.741909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.742257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.742338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.742537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.742572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.742946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.743046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.743427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.743465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.743806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.743871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.744232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.744297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.744532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.744568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 
00:33:43.454 [2024-07-12 13:41:40.744776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.744848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.745145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.745223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.745495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.745531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.745784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.745847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.746165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.746233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.746489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.454 [2024-07-12 13:41:40.746525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.454 qpair failed and we were unable to recover it. 00:33:43.454 [2024-07-12 13:41:40.746794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.746860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.747196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.747261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.747514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.747544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.747734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.747763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 
00:33:43.455 [2024-07-12 13:41:40.747926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.747960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.748125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.748154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.748326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.748355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.748532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.748561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.748713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.748743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.748968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.749009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.749184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.749212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.749359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.749388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.749557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.749586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.749753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.749783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 
00:33:43.455 [2024-07-12 13:41:40.749926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.749956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.750107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.750137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.750322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.750351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.750524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.750554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.750733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.750763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.750908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.750936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.455 [2024-07-12 13:41:40.751102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.455 [2024-07-12 13:41:40.751129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.455 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.751337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.751403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.751554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.751581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.751814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.751881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 
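From this point the failing tqpair pointer alternates between 0x7f3910000b90 and 0x7f3918000b90, i.e. at least two qpair contexts are independently retrying against the dead listener. A quick, hypothetical way to confirm that from a saved copy of this output (the target_disconnect.log file name is an assumption):

# Count how many distinct qpair contexts kept retrying, and how often each one failed.
grep -o 'tqpair=0x[0-9a-f]*' target_disconnect.log | sort | uniq -c
# Tally the errno values reported by posix_sock_create (here: all 111 / ECONNREFUSED).
grep -o 'errno = [0-9]*' target_disconnect.log | sort | uniq -c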
00:33:43.456 [2024-07-12 13:41:40.752237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.752300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.752524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.752552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.752809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.752843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.753071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.753151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.753407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.753434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.753573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.753599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.753734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.753759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.754011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.754074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.754394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.754422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.754551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.754577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 
00:33:43.456 [2024-07-12 13:41:40.754704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.754730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.754930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.754994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.755397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.755424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.755555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.755581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.755929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.755992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.756378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.756404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.756561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.756586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.756902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.756936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.757329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.757405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.757560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.757605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 
00:33:43.456 [2024-07-12 13:41:40.757925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.758003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.758237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.758271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.758445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.758471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.758599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.456 [2024-07-12 13:41:40.758625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.456 qpair failed and we were unable to recover it. 00:33:43.456 [2024-07-12 13:41:40.758751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.758776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.758928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.758972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.759218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.759283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.759547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.759572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.759825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.759889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.760273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.760350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 
00:33:43.457 [2024-07-12 13:41:40.760540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.760566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.760790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.760856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.761248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.761312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.761568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.761610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.762052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.762124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.762516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.762556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.762796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.762834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.763190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.763264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.763685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.763758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.764061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.764099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 
00:33:43.457 [2024-07-12 13:41:40.764350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.764389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.764733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.764805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.765108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.765147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.765509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.765584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.766032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.766104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.766392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.766432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.766804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.766877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.767295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.767395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.767726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.767767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.768056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.768097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 
00:33:43.457 [2024-07-12 13:41:40.768312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.457 [2024-07-12 13:41:40.768412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.457 qpair failed and we were unable to recover it. 00:33:43.457 [2024-07-12 13:41:40.768743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.768785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.769168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.769243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.769722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.769797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.769943] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:43.458 [2024-07-12 13:41:40.770033] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.458 [2024-07-12 13:41:40.770154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.770207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.770577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.770666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.771059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.771102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.771354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.771398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.771831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.771911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 
00:33:43.458 [2024-07-12 13:41:40.772367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.772441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.772816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.772862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.773277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.773375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.773780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.773854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.774184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.774224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.774457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.774499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.774841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.774914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.775262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.775302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.775593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.775642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.775949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.776022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 
00:33:43.458 [2024-07-12 13:41:40.776381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.776421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.776695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.776770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.777201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.777275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.777671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.777727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.778147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.778218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.778671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.778745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.779141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.779213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.779624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.779699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.458 qpair failed and we were unable to recover it. 00:33:43.458 [2024-07-12 13:41:40.780093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.458 [2024-07-12 13:41:40.780133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.780391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.780443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 
00:33:43.459 [2024-07-12 13:41:40.780881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.780953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.781401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.781485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.781879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.781933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.782358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.782431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.782868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.782941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.783338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.783414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.783848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.783920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.784338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.784393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.784798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.784856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.785156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.785213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 
00:33:43.459 [2024-07-12 13:41:40.785590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.785664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.786070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.786151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.786587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.786660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.787043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.787082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.787363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.787423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.787861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.787934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.788339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.788413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.788813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.788874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.789312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.789419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.789863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.789938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 
00:33:43.459 [2024-07-12 13:41:40.790398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.790471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.790854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.790916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.791342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.791415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.791809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.791890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.792334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.792408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.459 [2024-07-12 13:41:40.792809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.459 [2024-07-12 13:41:40.792874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.459 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.793328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.793402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.793816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.793890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.794295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.794383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.794781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.794867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 
00:33:43.460 [2024-07-12 13:41:40.795273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.795361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.795772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.795844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.796247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.796327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.796771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.796856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.797258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.797357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.797893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.797997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.798410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.798454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.798726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.798805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.799256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.799343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.799792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.799867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 
00:33:43.460 [2024-07-12 13:41:40.800261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.800300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.800617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.800691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.801135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.801209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.801691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.801767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.802172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.802245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.802663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.802735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.803142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.803217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.803689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.803763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.804175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.804249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.804700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.804774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 
00:33:43.460 [2024-07-12 13:41:40.805167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.805240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.460 [2024-07-12 13:41:40.805658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.805733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.806181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.460 [2024-07-12 13:41:40.806254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.460 qpair failed and we were unable to recover it. 00:33:43.460 [2024-07-12 13:41:40.806699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.806771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.807228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.807299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.807678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.807717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.807976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.808049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.808416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.808446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.808670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.808699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.808874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.808903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 
00:33:43.461 [2024-07-12 13:41:40.809083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.809113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.809287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.809330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.809468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:43.461 [2024-07-12 13:41:40.809487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.809518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.809665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.809694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.809899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.809929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.810081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.810110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.810307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.810341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.810511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.810540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.810716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.810745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 
00:33:43.461 [2024-07-12 13:41:40.810914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.810943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.811159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.811199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.811358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.811385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.811514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.811541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.811680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.811705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.811918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.811943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.812098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.812124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.812279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.812304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.812452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.461 [2024-07-12 13:41:40.812478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.461 qpair failed and we were unable to recover it. 00:33:43.461 [2024-07-12 13:41:40.812610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.812637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 
00:33:43.462 [2024-07-12 13:41:40.812816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.812842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.812981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.813007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.813125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.813150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.813310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.813343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.813500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.813525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.813681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.813705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.813828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.813853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.814008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.814035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.814203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.814228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.814379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.814405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 
00:33:43.462 [2024-07-12 13:41:40.814582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.814607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.814734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.814759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.814910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.814935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.815081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.815106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.815224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.815249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.815418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.815445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.815568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.815592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.815743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.815769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.462 [2024-07-12 13:41:40.815893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.462 [2024-07-12 13:41:40.815920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.462 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.816102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.816128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 
00:33:43.463 [2024-07-12 13:41:40.816277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.816322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.816466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.816491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.816624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.816649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.816795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.816820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.816953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.816978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.817100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.817127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.817285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.817311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.817472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.817497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.817646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.817671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.817802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.817827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 
00:33:43.463 [2024-07-12 13:41:40.818008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.818033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.818184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.818209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.818371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.818398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.818522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.818547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.818717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.818742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.818900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.818925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.819114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.819139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.819291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.819320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.819501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.819526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.819660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.819685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 
00:33:43.463 [2024-07-12 13:41:40.819859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.819884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.820039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.820064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.820215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.820241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.820405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.820431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.820585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.820610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.820738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.463 [2024-07-12 13:41:40.820764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.463 qpair failed and we were unable to recover it. 00:33:43.463 [2024-07-12 13:41:40.820918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.464 [2024-07-12 13:41:40.820942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.464 qpair failed and we were unable to recover it. 00:33:43.464 [2024-07-12 13:41:40.821113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.464 [2024-07-12 13:41:40.821138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.464 qpair failed and we were unable to recover it. 00:33:43.464 [2024-07-12 13:41:40.821294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.464 [2024-07-12 13:41:40.821328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.464 qpair failed and we were unable to recover it. 00:33:43.464 [2024-07-12 13:41:40.821483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.464 [2024-07-12 13:41:40.821508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.464 qpair failed and we were unable to recover it. 
00:33:43.464 [2024-07-12 13:41:40.821689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.464 [2024-07-12 13:41:40.821713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420
00:33:43.464 qpair failed and we were unable to recover it.
00:33:43.467 [2024-07-12 13:41:40.839327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.467 [2024-07-12 13:41:40.839343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:43.467 [2024-07-12 13:41:40.839352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420
00:33:43.467 qpair failed and we were unable to recover it.
00:33:43.471 [2024-07-12 13:41:40.858728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.471 [2024-07-12 13:41:40.858754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420
00:33:43.471 qpair failed and we were unable to recover it.
00:33:43.471 [2024-07-12 13:41:40.858930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.471 [2024-07-12 13:41:40.858955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.859082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.859108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.859260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.859289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.859451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.859477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.859633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.859658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.859806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.859831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.859962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.859988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.860115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.860140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.860349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.860375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.860526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.860551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 
00:33:43.472 [2024-07-12 13:41:40.860707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.860731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.860886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.860911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.861090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.861115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.861232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.861257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.861388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.861415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.861599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.861625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.861788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.861813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.861942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.861967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.862089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.862114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.862266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.862292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 
00:33:43.472 [2024-07-12 13:41:40.862430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.862456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.862582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.862607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.862730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.862755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.862906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.862931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.863083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.863108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.863257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.863282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.863461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.863488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.472 [2024-07-12 13:41:40.863612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.472 [2024-07-12 13:41:40.863637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.472 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.863816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.863842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.864004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.864031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 
00:33:43.473 [2024-07-12 13:41:40.864188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.864214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.864353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.864380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.864502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.864528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.864680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.864705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.864838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.864863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.864985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.865010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.865161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.865186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.865312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.865342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.865540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.865566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.865726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.865751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 
00:33:43.473 [2024-07-12 13:41:40.865877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.865903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.866055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.866080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.866230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.866264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.866394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.866420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.866548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.866573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.866696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.866721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.866849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.866874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.867055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.867080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.867202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.867227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.867377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.867402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 
00:33:43.473 [2024-07-12 13:41:40.867555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.867580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.867731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.867757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.867881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.867907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.868053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.868079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.868255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.473 [2024-07-12 13:41:40.868281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.473 qpair failed and we were unable to recover it. 00:33:43.473 [2024-07-12 13:41:40.868425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.868452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.868621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.868647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.868826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.868851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.868999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.869024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.869201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.869226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 
00:33:43.474 [2024-07-12 13:41:40.869381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.869407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.869561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.869587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.869710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.869735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.869887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.869912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.870074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.870101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.870281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.870307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.870460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.870485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.870621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.870647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.870799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.870826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.871006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.871032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 
00:33:43.474 [2024-07-12 13:41:40.871182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.871208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.871370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.871396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.871547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.871572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.871704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.871730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.871911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.871938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.872092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.872117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.872297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.872328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.872455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.474 [2024-07-12 13:41:40.872480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.474 qpair failed and we were unable to recover it. 00:33:43.474 [2024-07-12 13:41:40.872610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.872637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.872763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.872789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 
00:33:43.475 [2024-07-12 13:41:40.872940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.872968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.873096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.873121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.873261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.873290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.873477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.873503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.873656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.873681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.873866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.873892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.874037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.874062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.874206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.874231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.874395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.874421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.874582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.874608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 
00:33:43.475 [2024-07-12 13:41:40.874778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.874803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.874956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.874981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.875098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.875123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.875269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.875295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.875451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.875477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.875630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.875655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.875786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.875811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.875965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.875991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.876142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.876167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.876312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.876342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 
00:33:43.475 [2024-07-12 13:41:40.876465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.876492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.876663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.876688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.876832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.876857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.475 qpair failed and we were unable to recover it. 00:33:43.475 [2024-07-12 13:41:40.877013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.475 [2024-07-12 13:41:40.877038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.877162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.877188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.877342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.877368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.877536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.877562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.877686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.877711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.877865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.877892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.878072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.878098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 
00:33:43.476 [2024-07-12 13:41:40.878226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.878252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.878408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.878434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.878589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.878615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.878763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.878789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.878939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.878964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.879090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.879115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.879268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.879293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.879500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.879537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.879696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.879727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.879923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.879953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 
00:33:43.476 [2024-07-12 13:41:40.880122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.880148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.880293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.880324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.880449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.880480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.880662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.880687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.880842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.880867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.881028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.881054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.881207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.881232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.881387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.881413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.881599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.881624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.476 [2024-07-12 13:41:40.881773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.881798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 
00:33:43.476 [2024-07-12 13:41:40.881951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.476 [2024-07-12 13:41:40.881977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.476 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.882124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.882149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.882268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.882292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.882449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.882474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.882607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.882632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.882811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.882836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.883017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.883042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.883270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.883295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.883469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.883501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.883653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.883692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 
00:33:43.477 [2024-07-12 13:41:40.883893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.883922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.884094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.884124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.884328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.884358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.884535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.884565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.884734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.884763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.884909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.884940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.885109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.885139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.885325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.885364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.885508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.885536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.885693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.885719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 
00:33:43.477 [2024-07-12 13:41:40.885948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.885974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.886127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.886152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.886306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.886338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.886491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.886517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.886672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.886697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.886852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.886877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.887001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.887027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.887170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.477 [2024-07-12 13:41:40.887196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.477 qpair failed and we were unable to recover it. 00:33:43.477 [2024-07-12 13:41:40.887351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.887378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.887508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.887534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 
00:33:43.478 [2024-07-12 13:41:40.887687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.887712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.887866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.887892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.888020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.888051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.888204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.888230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.888414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.888441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.888598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.888623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.888804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.888829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.888982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.889008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.889165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.889191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.889323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.889348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 
00:33:43.478 [2024-07-12 13:41:40.889481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.889507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.889654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.889679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.889829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.889854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.889986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.890011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.890137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.890162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.890291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.890326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.890475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.890502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.890655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.890681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.890841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.890867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.891030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.891055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 
00:33:43.478 [2024-07-12 13:41:40.891213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.891238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.891391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.891418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.891547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.891573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.891687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.891713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.891871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.891896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.478 qpair failed and we were unable to recover it. 00:33:43.478 [2024-07-12 13:41:40.892096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.478 [2024-07-12 13:41:40.892121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.892245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.892271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.892443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.892470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.892611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.892639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.892843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.892876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 
00:33:43.479 [2024-07-12 13:41:40.893023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.893052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.893227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.893257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.893466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.893496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.893652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.893681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.893834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.893864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.894042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.894070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.894228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.894254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.894415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.894441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.894621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.894646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.894784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.894810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 
00:33:43.479 [2024-07-12 13:41:40.894964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.894990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.895123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.895148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.895348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.895378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.895530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.895557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.895735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.895760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.895918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.895943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.896075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.896101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.896223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.896249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.896399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.896426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.896555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.896582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 
00:33:43.479 [2024-07-12 13:41:40.896709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.896736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.896889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.896916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.479 [2024-07-12 13:41:40.897045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.479 [2024-07-12 13:41:40.897070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.479 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.897222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.897247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.897431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.897458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.897614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.897639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.897796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.897822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.897974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.897999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.898154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.898179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.898355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.898381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 
00:33:43.480 [2024-07-12 13:41:40.898515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.898540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.898661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.898686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.898816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.898843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.898971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.898996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.899120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.899145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.899334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.899360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.899490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.899517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.899700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.899726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.899857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.899882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.900013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.900038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 
00:33:43.480 [2024-07-12 13:41:40.900218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.900243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.900375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.900401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.900531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.900557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.900733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.900759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.900916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.900941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.901092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.901118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.901267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.901294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.901460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.901493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.901645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.901675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 00:33:43.480 [2024-07-12 13:41:40.901876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.480 [2024-07-12 13:41:40.901905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.480 qpair failed and we were unable to recover it. 
00:33:43.480 [2024-07-12 13:41:40.902048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.902076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.902208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.902233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.902372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.902407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.902561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.902587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.902715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.902741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.902893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.902918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.903096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.903122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.903276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.903301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.903435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.903460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 00:33:43.481 [2024-07-12 13:41:40.903616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.481 [2024-07-12 13:41:40.903641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.481 qpair failed and we were unable to recover it. 
00:33:43.481 [2024-07-12 13:41:40.903768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.903794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.903931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.903957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.904109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.904136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.904256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.904281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.904413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.904441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.904582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.904607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.904747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.904772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.904909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.904935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.905070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.905096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.905222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.905247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 
00:33:43.752 [2024-07-12 13:41:40.905375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.905401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.905527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.905553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.905711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.905737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.905918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.905943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.906075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.906101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.906223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.906249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.906411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.906452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.906614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.906642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.906797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.906823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 00:33:43.752 [2024-07-12 13:41:40.906960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.752 [2024-07-12 13:41:40.906988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.752 qpair failed and we were unable to recover it. 
00:33:43.752 [2024-07-12 13:41:40.907121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.907147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.907276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.907303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.907465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.907491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.907648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.907674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.907795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.907821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.907946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.907971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.908097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.908123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.908251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.908276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.908414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.908440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.908564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.908589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 
00:33:43.753 [2024-07-12 13:41:40.908741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.908766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.908898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.908924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.909076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.909106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.909258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.909284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.909443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.909469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.909627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.909652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.909812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.909838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.909965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.909992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.910136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.910161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.910293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.910326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 
00:33:43.753 [2024-07-12 13:41:40.910478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.910503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.910623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.910649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.910779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.910805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.910964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.910989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.911143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.911169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.911344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.911371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.911514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.753 [2024-07-12 13:41:40.911554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.753 qpair failed and we were unable to recover it. 00:33:43.753 [2024-07-12 13:41:40.911724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.911751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.911878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.911904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.912059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.912087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 
00:33:43.754 [2024-07-12 13:41:40.912242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.912268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.912428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.912455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.912635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.912661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.912786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.912812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.912966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.912993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.913119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.913144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.913268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.913294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.913474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.913514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.913646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.913675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.913811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.913837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 
00:33:43.754 [2024-07-12 13:41:40.913992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.914018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.914168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.914194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.914349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.914377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.914502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.914527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.914684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.914710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.914860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.914885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.915047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.915076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.915244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.915273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.915411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.915439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.915569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.915595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 
00:33:43.754 [2024-07-12 13:41:40.915723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.915750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.915899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.915924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.916060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.916087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.916214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.916239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.754 [2024-07-12 13:41:40.916414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.754 [2024-07-12 13:41:40.916439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.754 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.916597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.916624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.916801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.916827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.916983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.917008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.917136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.917164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.917290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.917322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 
00:33:43.755 [2024-07-12 13:41:40.917453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.917479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.917605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.917632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.917783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.917809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.917963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.917989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.918143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.918170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.918334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.918373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.918536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.918564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.918701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.918729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.918856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.918882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.919008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.919034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 
00:33:43.755 [2024-07-12 13:41:40.919200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.919226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.919398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.919437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.919568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.919595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.919721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.919747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.919912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.919938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.920066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.920091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.920260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.920287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.920423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.920449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.920583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.920609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.920765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.920791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 
00:33:43.755 [2024-07-12 13:41:40.920976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.921001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.921174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.921200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.921371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.921397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.921530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.921559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.921741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.921767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.921890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.921915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.922070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.922095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.922228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.922254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.922409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.922436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.922599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.922627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 
00:33:43.755 [2024-07-12 13:41:40.922762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.922789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.922920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.755 [2024-07-12 13:41:40.922947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.755 qpair failed and we were unable to recover it. 00:33:43.755 [2024-07-12 13:41:40.923100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.923126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.923258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.923284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.923420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.923447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.923602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.923628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.923762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.923788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.923944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.923969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.924099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.924127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.924283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.924310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 
00:33:43.756 [2024-07-12 13:41:40.924439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.924465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.924588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.924614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.924767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.924795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.924932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.924958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.925110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.925135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.925263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.925290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.925423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.925454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.925608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.925634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.925761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.925788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.925949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.925975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 
00:33:43.756 [2024-07-12 13:41:40.926101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.926127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.926281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.926308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.926470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.926496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.926622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.926648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.926776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.926803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.926946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.926972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.927101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.927128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.927259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.927285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.927445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.927472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.927595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.927620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 
00:33:43.756 [2024-07-12 13:41:40.927774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.927799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.927957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.927983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.928105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.928132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.928289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.928320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.928443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.928469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.928603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.928629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.928751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.928778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.928911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.928950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.929098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.929126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.756 [2024-07-12 13:41:40.929250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.929277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 
00:33:43.756 [2024-07-12 13:41:40.929406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.756 [2024-07-12 13:41:40.929432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.756 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.929557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.929582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.929705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.929731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.929820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.757 [2024-07-12 13:41:40.929855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.757 [2024-07-12 13:41:40.929859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.929870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.757 [2024-07-12 13:41:40.929882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.757 [2024-07-12 13:41:40.929884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.757 [2024-07-12 13:41:40.929892] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.929952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:43.757 [2024-07-12 13:41:40.930106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.930130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.757 [2024-07-12 13:41:40.930039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.930096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:43.757 [2024-07-12 13:41:40.930098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:43.757 [2024-07-12 13:41:40.930372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.930399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.930542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.930568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 
00:33:43.757 [2024-07-12 13:41:40.930697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.930723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.930875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.930901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.931035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.931061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.931216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.931243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.931379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.931406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.931539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.931565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.931696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.931722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.931850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.931875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.932087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.932113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.932267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.932293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 
00:33:43.757 [2024-07-12 13:41:40.932429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.932455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.932600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.932626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.932748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.932774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.932900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.932925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.933050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.933076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.933201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.933227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.933347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.933373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.933510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.933536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.933740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.933766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.933897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.933926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 
00:33:43.757 [2024-07-12 13:41:40.934049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.934075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.757 [2024-07-12 13:41:40.934214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.757 [2024-07-12 13:41:40.934239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.757 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.934372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.934398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.934549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.934576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.934726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.934752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.934904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.934929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.935053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.935079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.935203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.935228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.935372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.935398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.935549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.935575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 
00:33:43.758 [2024-07-12 13:41:40.935723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.935749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.935868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.935893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.936095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.936120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.936278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.936304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.936463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.936488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.936616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.936642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.936773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.936798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.936924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.936951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.937080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.937106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.937231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.937256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 
00:33:43.758 [2024-07-12 13:41:40.937382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.937409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.937546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.937586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.937725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.937752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.937978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.938004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.938125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.938151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.938359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.938386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.938513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.938538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.938674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.938700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.938861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.938888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.939013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.939040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 
00:33:43.758 [2024-07-12 13:41:40.939167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.939194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.939337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.939364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.939496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.939522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.939641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.939667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.939794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.939820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.939948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.939974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.940102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.940128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.940260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.940285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.940413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.940440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.758 [2024-07-12 13:41:40.940573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.940605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 
00:33:43.758 [2024-07-12 13:41:40.940742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.758 [2024-07-12 13:41:40.940768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.758 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.940976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.941002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.941133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.941159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.941286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.941311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.941478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.941505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.941712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.941739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.941895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.941921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.942044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.942070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.942191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.942217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.942340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.942366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 
00:33:43.759 [2024-07-12 13:41:40.942493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.942518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.942681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.942706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.942845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.942871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.943008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.943034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.943162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.943189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.943418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.943458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.943601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.943627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.943782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.943807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.943960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.943984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.944140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.944165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 
00:33:43.759 [2024-07-12 13:41:40.944293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.944327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.944494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.944519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.944643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.944668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.944818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.944843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.944994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.945019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.945199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.945224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.945370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.945403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.945526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.945552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.945702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.945727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.945857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.945882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 
00:33:43.759 [2024-07-12 13:41:40.946015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.946055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.946202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.946229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.946371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.946410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.946543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.946569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.946716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.946741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.946939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.946964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.947154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.947182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.947367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.947393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.947513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.947540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 00:33:43.759 [2024-07-12 13:41:40.947668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.759 [2024-07-12 13:41:40.947693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.759 qpair failed and we were unable to recover it. 
00:33:43.760 [2024-07-12 13:41:40.947852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.947877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.948010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.948035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.948185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.948210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.948345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.948371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.948496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.948522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.948678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.948705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.948859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.948884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.949007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.949032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.949195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.949220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.949355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.949381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 
00:33:43.760 [2024-07-12 13:41:40.949514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.949540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.949719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.949745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.949900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.949925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.950058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.950089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.950246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.950272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.950452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.950477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.950601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.950626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.950746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.950771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.950901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.950926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.951043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.951068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 
00:33:43.760 [2024-07-12 13:41:40.951190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.951215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.951378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.951404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.951538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.951563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.951716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.951741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.951892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.951917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.952065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.952090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.952233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.952272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.952429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.952457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.952590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.952618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.952770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.952796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 
00:33:43.760 [2024-07-12 13:41:40.952922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.952947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.953068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.953093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.953255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.953281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.953412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.953438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.953561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.953586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.953741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.953766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.953921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.953947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.954087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.954114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.760 qpair failed and we were unable to recover it. 00:33:43.760 [2024-07-12 13:41:40.954272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.760 [2024-07-12 13:41:40.954299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.954434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.954461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 
00:33:43.761 [2024-07-12 13:41:40.954600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.954630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.954801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.954839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.954974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.955001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.955136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.955163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.955322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.955348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.955488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.955513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.955639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.955665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.955838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.955862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.956269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.956298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.956451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.956477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 
00:33:43.761 [2024-07-12 13:41:40.956613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.956639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.956774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.956799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.956957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.956982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.957135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.957160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.957349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.957377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.957513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.957537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.957669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.957693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.957844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.957869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.957987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.958012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.958156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.958195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 
00:33:43.761 [2024-07-12 13:41:40.958334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.958362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.958504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.958529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.958660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.958685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.958804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.958830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.958981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.959006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.959159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.959187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.959311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.959342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.959492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.959526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.959712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.959737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.959873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.959898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 
00:33:43.761 [2024-07-12 13:41:40.960046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.960239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.960266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.960391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.960417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.960544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.960570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.960693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.960718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.960843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.960868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.961003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.961029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.961182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.961209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.761 [2024-07-12 13:41:40.961376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.761 [2024-07-12 13:41:40.961402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.761 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.961533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.961558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 
00:33:43.762 [2024-07-12 13:41:40.961692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.961718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.961860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.961900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.962059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.962086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.962218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.962245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.962372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.962398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.962523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.962548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.962730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.962755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.962888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.962913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.963050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.963075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.963193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.963217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 
00:33:43.762 [2024-07-12 13:41:40.963339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.963364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.963497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.963522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.963675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.963700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.963820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.963845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.963981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.964011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.964172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.964197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.964329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.964354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.964474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.964499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.964647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.964672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.964801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.964826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 
00:33:43.762 [2024-07-12 13:41:40.964983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.965009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.965142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.965167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.965375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.965415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.965578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.965605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.965731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.965757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.965873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.965899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.966042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.966068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.966189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.966214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.966357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.966384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.966538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.966563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 
00:33:43.762 [2024-07-12 13:41:40.966687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.966712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.762 qpair failed and we were unable to recover it. 00:33:43.762 [2024-07-12 13:41:40.966839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.762 [2024-07-12 13:41:40.966864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.966996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.967021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.967150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.967175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.967293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.967322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.967478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.967504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.967625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.967650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.967777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.967802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.967934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.967959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.968118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.968143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 
00:33:43.763 [2024-07-12 13:41:40.968312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.968366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.968496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.968529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.968662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.968690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.968846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.968872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.969003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.969028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.969159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.969185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.969306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.969345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.969579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.969605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.969740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.969767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 00:33:43.763 [2024-07-12 13:41:40.969924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.763 [2024-07-12 13:41:40.969950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.763 qpair failed and we were unable to recover it. 
00:33:43.763 [2024-07-12 13:41:40.970112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.763 [2024-07-12 13:41:40.970137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420
00:33:43.763 qpair failed and we were unable to recover it.
00:33:43.768 [2024-07-12 13:41:41.005304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.768 [2024-07-12 13:41:41.005337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420
00:33:43.768 qpair failed and we were unable to recover it.
00:33:43.768 (The same three-line failure record repeats continuously at sub-millisecond intervals between the two timestamps above, alternating across tqpair handles 0x7f3918000b90, 0x7f3920000b90, and 0x7f3910000b90, always against addr=10.0.0.2, port=4420.)
00:33:43.768 [2024-07-12 13:41:41.005474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.768 [2024-07-12 13:41:41.005501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.768 qpair failed and we were unable to recover it. 00:33:43.768 [2024-07-12 13:41:41.005625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.768 [2024-07-12 13:41:41.005652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.768 qpair failed and we were unable to recover it. 00:33:43.768 [2024-07-12 13:41:41.005785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.768 [2024-07-12 13:41:41.005810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.768 qpair failed and we were unable to recover it. 00:33:43.768 [2024-07-12 13:41:41.005931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.768 [2024-07-12 13:41:41.005957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.768 qpair failed and we were unable to recover it. 00:33:43.768 [2024-07-12 13:41:41.006104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.006130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.006253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.006278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.006501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.006527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.006656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.006682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.006809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.006834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.006968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.006998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 
00:33:43.769 [2024-07-12 13:41:41.007126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.007152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.007279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.007304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.007525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.007550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.007682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.007707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.007835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.007860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.008006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.008032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.008166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.008191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.008343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.008370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.008490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.008516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.008638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.008663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 
00:33:43.769 [2024-07-12 13:41:41.008788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.008814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.008967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.008997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.009155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.009181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.009335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.009360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.009497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.009522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.009641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.009667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.009811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.009836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.009968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.009993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.010133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.010158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.010294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.010324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 
00:33:43.769 [2024-07-12 13:41:41.010461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.010486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.010609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.010635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.010790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.010817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.010970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.010996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.011150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.011175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.011335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.011361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.011493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.011519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.011675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.011701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.011853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.011878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.012003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.012028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 
00:33:43.769 [2024-07-12 13:41:41.012179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.012204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.012336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.012362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.012491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.012516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.769 [2024-07-12 13:41:41.012638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.769 [2024-07-12 13:41:41.012664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.769 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.012838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.012864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.012999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.013024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.013158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.013184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.013329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.013356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.013508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.013550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.013690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.013717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 
00:33:43.770 [2024-07-12 13:41:41.013857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.013884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.014037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.014063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.014197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.014223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.014377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.014403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.014527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.014554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.014692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.014719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.014848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.014873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.015015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.015040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.015195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.015220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.015365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.015391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 
00:33:43.770 [2024-07-12 13:41:41.015521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.015546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.015672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.015698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.015828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.015853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.015997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.016023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.016168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.016194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.016333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.016360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.016500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.016528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.016657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.016683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.016811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.016837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.016989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.017015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 
00:33:43.770 [2024-07-12 13:41:41.017150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.017177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.017305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.017338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.017474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.017500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.017627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.017652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.017784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.017808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.017964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.017991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.018109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.018135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.018270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.018295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.770 qpair failed and we were unable to recover it. 00:33:43.770 [2024-07-12 13:41:41.018433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.770 [2024-07-12 13:41:41.018460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.018584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.018610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 
00:33:43.771 [2024-07-12 13:41:41.018733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.018759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.018887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.018914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.019066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.019092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.019222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.019248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.019388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.019415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.019552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.019577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.019702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.019729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.019860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.019885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.020019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.020049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.020179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.020214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 
00:33:43.771 [2024-07-12 13:41:41.020355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.020384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.020554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.020580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.020712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.020737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.020889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.020915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.021066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.021092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.021227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.021252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.021375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.021401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.021528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.021555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.021693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.021720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.021846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.021872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 
00:33:43.771 [2024-07-12 13:41:41.021999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.022024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.022171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.022197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.022330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.022357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.022495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.022522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.022674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.022700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.022854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.022880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.023008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.023033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.023168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.023195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.023348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.023374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.023525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.023551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 
00:33:43.771 [2024-07-12 13:41:41.023670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.023696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.023814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.023840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.023961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.023987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.024111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.024137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.024307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.024340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.024473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.024499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.024646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.024672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.024797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.024824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.771 [2024-07-12 13:41:41.024952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.771 [2024-07-12 13:41:41.024978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.771 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.025129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.025154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 
00:33:43.772 [2024-07-12 13:41:41.025306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.025353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.025513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.025540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.025691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.025717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.025865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.025890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.026014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.026039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.026224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.026249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.026389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.026417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3910000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.026571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.026612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.026747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.026781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.026942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.026969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 
00:33:43.772 [2024-07-12 13:41:41.027089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.027115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.027245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.027271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.027405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.027433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.027557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.027582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.027739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.027764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.027884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.027909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.028066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.028093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.028249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.028274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.028410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.028439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 00:33:43.772 [2024-07-12 13:41:41.028602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.772 [2024-07-12 13:41:41.028628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.772 qpair failed and we were unable to recover it. 
00:33:43.773 [2024-07-12 13:41:41.036993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.037020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.773 [2024-07-12 13:41:41.037176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.037201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.773 [2024-07-12 13:41:41.037344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.037376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.773 [2024-07-12 13:41:41.037506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.037533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.773 [2024-07-12 13:41:41.037684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.037709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.773 [2024-07-12 13:41:41.037831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.037856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.773 [2024-07-12 13:41:41.038018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.038044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.773 [2024-07-12 13:41:41.038207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.038233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.773 A controller has encountered a failure and is being reset. 00:33:43.773 [2024-07-12 13:41:41.038431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.773 [2024-07-12 13:41:41.038471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.773 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.038608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.038636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 
00:33:43.774 [2024-07-12 13:41:41.038791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.038818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.038949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.038974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.039125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.039151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.039304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.039338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.039478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.039504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.039639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.039665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.039826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.039853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.039970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.039996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3918000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.040181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.040219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05450 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.040362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.040390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 
00:33:43.774 [2024-07-12 13:41:41.040546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.040572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.040716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.040741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.040866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.040892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.041051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.041077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3920000b90 with addr=10.0.0.2, port=4420 00:33:43.774 qpair failed and we were unable to recover it. 00:33:43.774 [2024-07-12 13:41:41.041240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.774 [2024-07-12 13:41:41.041279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf13480 with addr=10.0.0.2, port=4420 00:33:43.774 [2024-07-12 13:41:41.041298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13480 is same with the state(5) to be set 00:33:43.774 [2024-07-12 13:41:41.041330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf13480 (9): Bad file descriptor 00:33:43.774 [2024-07-12 13:41:41.041362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:43.774 [2024-07-12 13:41:41.041376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:43.774 [2024-07-12 13:41:41.041392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:43.774 Unable to reset the controller. 
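At this point the retry budget is exhausted: the log above shows the admin qpair flush failing with "Bad file descriptor", spdk_nvme_ctrlr_reconnect_poll_async reporting that controller reinitialization failed, and nvme_ctrlr_fail leaving the controller in a failed state ("Unable to reset the controller."). The underlying condition is simply that no listener is reachable at 10.0.0.2:4420 during the disconnect window. A hedged sketch for checking that from the initiator side; nc(1) and nvme-cli are standard tools used here only for illustration, not something target_disconnect.sh itself runs:

  TARGET_IP=10.0.0.2        # address and port taken from the log above
  TARGET_PORT=4420

  if nc -z -w 2 "$TARGET_IP" "$TARGET_PORT"; then
      echo "listener up: qpair connects should succeed again"
      # Optionally confirm the subsystem is advertised once the target is back:
      nvme discover -t tcp -a "$TARGET_IP" -s "$TARGET_PORT"
  else
      echo "listener down: connect() keeps returning ECONNREFUSED (errno 111)"
  fi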
00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 Malloc0 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 [2024-07-12 13:41:41.121409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 [2024-07-12 13:41:41.149641] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.774 13:41:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3731957 00:33:45.146 Controller properly reset. 00:33:50.403 Initializing NVMe Controllers 00:33:50.403 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:50.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:50.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:50.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:50.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:50.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:50.403 Initialization complete. Launching workers. 
00:33:50.403 Starting thread on core 1 00:33:50.403 Starting thread on core 2 00:33:50.403 Starting thread on core 3 00:33:50.403 Starting thread on core 0 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:50.403 00:33:50.403 real 0m10.685s 00:33:50.403 user 0m32.004s 00:33:50.403 sys 0m8.076s 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.403 ************************************ 00:33:50.403 END TEST nvmf_target_disconnect_tc2 00:33:50.403 ************************************ 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:50.403 13:41:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:50.403 rmmod nvme_tcp 00:33:50.403 rmmod nvme_fabrics 00:33:50.403 rmmod nvme_keyring 00:33:50.403 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:50.403 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3732370 ']' 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3732370 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3732370 ']' 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3732370 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3732370 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3732370' 00:33:50.404 killing process with pid 3732370 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3732370 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3732370 00:33:50.404 
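For reference, the rpc_cmd sequence above (bdev_malloc_create, nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener, plus the discovery listener) is the standard bring-up for the TCP target that the disconnect test re-creates before declaring "Controller properly reset." A standalone, hedged sketch of the same steps using scripts/rpc.py directly; SPDK_DIR, the sleep-based wait, and the default /var/tmp/spdk.sock RPC socket are assumptions here, since the harness wraps these calls in rpc_cmd and waitforlisten:

  SPDK_DIR=/path/to/spdk                      # assumption: your SPDK checkout
  RPC="$SPDK_DIR/scripts/rpc.py"

  "$SPDK_DIR/build/bin/nvmf_tgt" &            # start the target application
  sleep 1                                     # crude wait; the test uses waitforlisten instead

  $RPC bdev_malloc_create 64 512 -b Malloc0   # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_transport -t tcp -o        # TCP transport, same flags the test passes
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420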
13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:50.404 13:41:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.304 13:41:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:52.304 00:33:52.304 real 0m15.374s 00:33:52.304 user 0m57.376s 00:33:52.304 sys 0m10.439s 00:33:52.304 13:41:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:52.305 13:41:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:52.305 ************************************ 00:33:52.305 END TEST nvmf_target_disconnect 00:33:52.305 ************************************ 00:33:52.305 13:41:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:52.305 13:41:49 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:52.305 13:41:49 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:52.305 13:41:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.305 13:41:49 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:52.305 00:33:52.305 real 27m1.045s 00:33:52.305 user 73m22.217s 00:33:52.305 sys 6m35.495s 00:33:52.305 13:41:49 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:52.305 13:41:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.305 ************************************ 00:33:52.305 END TEST nvmf_tcp 00:33:52.305 ************************************ 00:33:52.305 13:41:49 -- common/autotest_common.sh@1142 -- # return 0 00:33:52.305 13:41:49 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:33:52.305 13:41:49 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:52.305 13:41:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:52.305 13:41:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.305 13:41:49 -- common/autotest_common.sh@10 -- # set +x 00:33:52.305 ************************************ 00:33:52.305 START TEST spdkcli_nvmf_tcp 00:33:52.305 ************************************ 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:52.305 * Looking for test storage... 
00:33:52.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3733566 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3733566 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3733566 ']' 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:52.305 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.305 [2024-07-12 13:41:49.546886] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:33:52.305 [2024-07-12 13:41:49.546972] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733566 ] 00:33:52.305 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.305 [2024-07-12 13:41:49.577944] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:52.305 [2024-07-12 13:41:49.602944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:52.305 [2024-07-12 13:41:49.687082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.305 [2024-07-12 13:41:49.687086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.563 13:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:52.563 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:52.563 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:52.563 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:52.563 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:52.563 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:52.563 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:52.563 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:52.563 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:52.563 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:52.563 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:52.563 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:52.563 ' 00:33:55.087 [2024-07-12 13:41:52.341199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.457 [2024-07-12 13:41:53.565473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:58.979 [2024-07-12 13:41:55.824403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:00.349 [2024-07-12 13:41:57.774439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:02.246 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:02.246 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:02.246 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:02.246 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:02.246 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:02.246 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:02.246 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:02.246 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:02.246 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:02.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:02.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:02.246 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:02.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:02.246 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:02.246 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:02.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:02.247 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:02.247 13:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:02.247 13:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:02.247 13:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.247 13:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:02.247 13:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:02.247 13:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.247 13:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:02.247 13:41:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.504 13:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:02.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:02.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:02.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:02.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:02.504 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:02.504 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:02.504 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:02.504 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:02.504 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:02.504 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:02.504 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:02.504 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:02.504 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:02.504 ' 00:34:07.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:07.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:07.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:07.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:07.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:07.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:07.760 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:07.760 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:07.760 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:07.760 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:07.760 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:07.760 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:07.760 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:07.760 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3733566 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3733566 ']' 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3733566 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3733566 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3733566' 00:34:07.760 killing process with pid 3733566 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3733566 00:34:07.760 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3733566 00:34:08.018 13:42:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:08.018 13:42:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:08.018 13:42:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3733566 ']' 00:34:08.018 13:42:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3733566 00:34:08.018 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3733566 ']' 00:34:08.018 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3733566 00:34:08.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3733566) - No such process 00:34:08.018 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3733566 is not found' 00:34:08.018 Process with pid 3733566 is not found 00:34:08.018 13:42:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:08.019 13:42:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:08.019 13:42:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:08.019 00:34:08.019 real 0m15.944s 00:34:08.019 user 0m33.724s 00:34:08.019 sys 0m0.801s 00:34:08.019 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:08.019 13:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:08.019 ************************************ 00:34:08.019 END TEST spdkcli_nvmf_tcp 00:34:08.019 ************************************ 00:34:08.019 13:42:05 -- common/autotest_common.sh@1142 -- # return 0 00:34:08.019 
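The spdkcli run above drives the whole configuration through spdkcli_job.py and then lets check_match diff the output of scripts/spdkcli.py ll /nvmf against spdkcli_nvmf.test.match. Outside the harness, the same tree can be exercised with one-shot spdkcli.py invocations. This is a hedged sketch only: the single-command-per-invocation usage is inferred from the ll /nvmf call above, SPDK_DIR is an assumption, and a target application must already be serving the default RPC socket:

  SPDK_DIR=/path/to/spdk                      # assumption: your SPDK checkout
  CLI="$SPDK_DIR/scripts/spdkcli.py"          # needs nvmf_tgt (or another SPDK app) already running

  # Create a malloc bdev and expose it over NVMe/TCP, mirroring the job file above.
  $CLI /bdevs/malloc create 32 512 Malloc1
  $CLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $CLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
  $CLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

  # Inspect the tree -- the same listing check_match compares against the .match file.
  $CLI ll /nvmf

  # Tear down, mirroring the clear-config half of the run.
  $CLI /nvmf/subsystem delete_all
  $CLI /bdevs/malloc delete Malloc1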
13:42:05 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:08.019 13:42:05 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:08.019 13:42:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:08.019 13:42:05 -- common/autotest_common.sh@10 -- # set +x 00:34:08.019 ************************************ 00:34:08.019 START TEST nvmf_identify_passthru 00:34:08.019 ************************************ 00:34:08.019 13:42:05 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:08.019 * Looking for test storage... 00:34:08.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:08.279 13:42:05 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.279 13:42:05 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.279 13:42:05 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.279 13:42:05 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:08.279 13:42:05 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.279 13:42:05 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.279 13:42:05 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.279 13:42:05 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:08.279 13:42:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.279 13:42:05 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.279 13:42:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:08.279 13:42:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:08.279 13:42:05 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:08.279 13:42:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.225 13:42:07 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:10.225 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:10.226 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:10.226 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:10.226 Found net devices under 0000:09:00.0: cvl_0_0 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:10.226 Found net devices under 0000:09:00.1: cvl_0_1 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
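The device discovery traced above walks a whitelist of NIC PCI IDs (both ports of an Intel E810, 0x8086:0x159b, on this host) and resolves each BDF to its kernel interface by globbing sysfs. A small sketch of that lookup, using the BDF reported in this run:

    # Resolve the net devices sitting on top of one PCI NIC port, as the
    # nvmf/common.sh trace above does. The BDF is the one found in this run.
    pci=0000:09:00.0
    shopt -s nullglob                               # empty array if the NIC exposes no netdev
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")         # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"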
00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:10.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:34:10.226 00:34:10.226 --- 10.0.0.2 ping statistics --- 00:34:10.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.226 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:34:10.226 00:34:10.226 --- 10.0.0.1 ping statistics --- 00:34:10.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.226 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:10.226 13:42:07 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:10.226 13:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.226 13:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:10.226 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:10.485 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:10.485 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:34:10.485 13:42:07 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:34:10.485 13:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:34:10.485 13:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:34:10.485 13:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:10.485 13:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:10.485 13:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:10.485 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.669 
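nvmf_tcp_init, traced above, splits the two E810 ports between target and initiator: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and takes the target address 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1/24, the NVMe/TCP port 4420 is opened, and a ping in each direction confirms the link. A condensed sketch of that sequence (root required; the interface names are specific to this host):

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the default namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1               # target -> initiator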
13:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:34:14.669 13:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:14.669 13:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:14.669 13:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:14.669 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.854 13:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:18.854 13:42:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:18.854 13:42:15 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:18.854 13:42:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.854 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.854 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3738677 00:34:18.854 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:18.854 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:18.854 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3738677 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3738677 ']' 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.854 [2024-07-12 13:42:16.057170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:34:18.854 [2024-07-12 13:42:16.057266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.854 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.854 [2024-07-12 13:42:16.097143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:18.854 [2024-07-12 13:42:16.123288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:18.854 [2024-07-12 13:42:16.212611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:18.854 [2024-07-12 13:42:16.212665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.854 [2024-07-12 13:42:16.212692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.854 [2024-07-12 13:42:16.212704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.854 [2024-07-12 13:42:16.212713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.854 [2024-07-12 13:42:16.212796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.854 [2024-07-12 13:42:16.212829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:18.854 [2024-07-12 13:42:16.212889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:18.854 [2024-07-12 13:42:16.212891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:34:18.854 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.854 INFO: Log level set to 20 00:34:18.854 INFO: Requests: 00:34:18.854 { 00:34:18.854 "jsonrpc": "2.0", 00:34:18.854 "method": "nvmf_set_config", 00:34:18.854 "id": 1, 00:34:18.854 "params": { 00:34:18.854 "admin_cmd_passthru": { 00:34:18.854 "identify_ctrlr": true 00:34:18.854 } 00:34:18.854 } 00:34:18.854 } 00:34:18.854 00:34:18.854 INFO: response: 00:34:18.854 { 00:34:18.854 "jsonrpc": "2.0", 00:34:18.854 "id": 1, 00:34:18.854 "result": true 00:34:18.854 } 00:34:18.854 00:34:18.854 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.855 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:18.855 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.855 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.855 INFO: Setting log level to 20 00:34:18.855 INFO: Setting log level to 20 00:34:18.855 INFO: Log level set to 20 00:34:18.855 INFO: Log level set to 20 00:34:18.855 INFO: Requests: 00:34:18.855 { 00:34:18.855 "jsonrpc": "2.0", 00:34:18.855 "method": "framework_start_init", 00:34:18.855 "id": 1 00:34:18.855 } 00:34:18.855 00:34:18.855 INFO: Requests: 00:34:18.855 { 00:34:18.855 "jsonrpc": "2.0", 00:34:18.855 "method": "framework_start_init", 00:34:18.855 "id": 1 00:34:18.855 } 00:34:18.855 00:34:19.113 [2024-07-12 13:42:16.377668] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:19.113 INFO: response: 00:34:19.113 { 00:34:19.113 "jsonrpc": "2.0", 00:34:19.113 "id": 1, 00:34:19.113 "result": true 00:34:19.113 } 00:34:19.113 00:34:19.113 INFO: response: 00:34:19.113 { 00:34:19.113 "jsonrpc": "2.0", 00:34:19.113 "id": 1, 00:34:19.113 "result": true 00:34:19.113 } 00:34:19.113 00:34:19.113 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.113 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
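Before serving I/O, the test enables Identify passthrough and finishes framework init over JSON-RPC; the requests and responses are echoed above, and the transport, controller and subsystem setup continues in the trace below. A condensed sketch of the same sequence driven through scripts/rpc.py (rpc_cmd in the trace is effectively a wrapper around it; the BDF, NQN and serial number are the ones from this run):

    rpc=./scripts/rpc.py                                  # run from an SPDK checkout
    $rpc nvmf_set_config --passthru-identify-ctrlr        # forward Identify admin commands to the backing bdev
    $rpc framework_start_init                             # leave the --wait-for-rpc holding state
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems                              # verify the namespace and listener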
00:34:19.113 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.113 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.113 INFO: Setting log level to 40 00:34:19.113 INFO: Setting log level to 40 00:34:19.113 INFO: Setting log level to 40 00:34:19.113 [2024-07-12 13:42:16.387770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.113 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.113 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:19.113 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:19.113 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.113 13:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:34:19.113 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.113 13:42:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.393 Nvme0n1 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.393 [2024-07-12 13:42:19.277629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.393 [ 00:34:22.393 { 00:34:22.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:22.393 "subtype": "Discovery", 00:34:22.393 "listen_addresses": [], 00:34:22.393 "allow_any_host": true, 00:34:22.393 "hosts": [] 00:34:22.393 }, 00:34:22.393 { 00:34:22.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.393 "subtype": "NVMe", 00:34:22.393 "listen_addresses": [ 00:34:22.393 { 00:34:22.393 "trtype": "TCP", 00:34:22.393 "adrfam": "IPv4", 00:34:22.393 "traddr": "10.0.0.2", 00:34:22.393 
"trsvcid": "4420" 00:34:22.393 } 00:34:22.393 ], 00:34:22.393 "allow_any_host": true, 00:34:22.393 "hosts": [], 00:34:22.393 "serial_number": "SPDK00000000000001", 00:34:22.393 "model_number": "SPDK bdev Controller", 00:34:22.393 "max_namespaces": 1, 00:34:22.393 "min_cntlid": 1, 00:34:22.393 "max_cntlid": 65519, 00:34:22.393 "namespaces": [ 00:34:22.393 { 00:34:22.393 "nsid": 1, 00:34:22.393 "bdev_name": "Nvme0n1", 00:34:22.393 "name": "Nvme0n1", 00:34:22.393 "nguid": "AE2D52439D574C9E80F8890193566FD6", 00:34:22.393 "uuid": "ae2d5243-9d57-4c9e-80f8-890193566fd6" 00:34:22.393 } 00:34:22.393 ] 00:34:22.393 } 00:34:22.393 ] 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:22.393 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:22.393 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.393 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:22.393 13:42:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:22.393 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:22.393 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:22.393 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:22.394 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:22.394 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:22.394 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:22.394 rmmod nvme_tcp 00:34:22.394 rmmod nvme_fabrics 00:34:22.394 rmmod nvme_keyring 00:34:22.394 13:42:19 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:22.394 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:22.394 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:22.394 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3738677 ']' 00:34:22.394 13:42:19 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3738677 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3738677 ']' 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3738677 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3738677 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3738677' 00:34:22.394 killing process with pid 3738677 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3738677 00:34:22.394 13:42:19 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3738677 00:34:23.767 13:42:21 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:23.767 13:42:21 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:23.767 13:42:21 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:23.767 13:42:21 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:23.767 13:42:21 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:23.767 13:42:21 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.767 13:42:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:23.767 13:42:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.305 13:42:23 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:26.305 00:34:26.305 real 0m17.804s 00:34:26.305 user 0m26.233s 00:34:26.305 sys 0m2.313s 00:34:26.305 13:42:23 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:26.305 13:42:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.305 ************************************ 00:34:26.305 END TEST nvmf_identify_passthru 00:34:26.305 ************************************ 00:34:26.305 13:42:23 -- common/autotest_common.sh@1142 -- # return 0 00:34:26.305 13:42:23 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:26.305 13:42:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:26.305 13:42:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:26.305 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:34:26.305 ************************************ 00:34:26.305 START TEST nvmf_dif 00:34:26.305 ************************************ 00:34:26.305 13:42:23 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:26.305 * Looking for test 
storage... 00:34:26.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.305 13:42:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.305 13:42:23 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.305 13:42:23 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.305 13:42:23 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.305 13:42:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.305 13:42:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.305 13:42:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.305 13:42:23 nvmf_dif -- 
paths/export.sh@5 -- # export PATH 00:34:26.305 13:42:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:26.305 13:42:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:26.305 13:42:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:26.305 13:42:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:26.305 13:42:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:26.305 13:42:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.305 13:42:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:26.305 13:42:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:26.305 13:42:23 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:26.305 13:42:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:28.215 13:42:25 nvmf_dif 
-- nvmf/common.sh@298 -- # mlx=() 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:28.215 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:28.215 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:28.215 Found net devices under 0000:09:00.0: cvl_0_0 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:28.215 Found net devices under 0000:09:00.1: cvl_0_1 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.215 13:42:25 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:28.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:34:28.216 00:34:28.216 --- 10.0.0.2 ping statistics --- 00:34:28.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.216 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:28.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:34:28.216 00:34:28.216 --- 10.0.0.1 ping statistics --- 00:34:28.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.216 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:28.216 13:42:25 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:29.606 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:29.606 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:29.606 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:29.606 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:29.606 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:29.606 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:29.606 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:29.606 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:29.606 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:29.606 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:29.606 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:29.606 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:29.606 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:29.606 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:29.606 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:29.606 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:29.606 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:29.606 13:42:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:29.606 13:42:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:29.606 13:42:26 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:29.606 13:42:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:29.606 13:42:26 
nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3741824 00:34:29.606 13:42:26 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3741824 00:34:29.607 13:42:26 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3741824 ']' 00:34:29.607 13:42:26 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.607 13:42:26 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:29.607 13:42:26 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.607 13:42:26 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:29.607 13:42:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.607 [2024-07-12 13:42:26.954568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:34:29.607 [2024-07-12 13:42:26.954666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.607 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.607 [2024-07-12 13:42:26.995898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:29.607 [2024-07-12 13:42:27.023238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.865 [2024-07-12 13:42:27.104634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.865 [2024-07-12 13:42:27.104680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.865 [2024-07-12 13:42:27.104708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.865 [2024-07-12 13:42:27.104719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.865 [2024-07-12 13:42:27.104728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
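The trace above launches the NVMe-oF target inside the cvl_0_0_ns_spdk network namespace and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal standalone sketch of the same startup, assuming the stock scripts/rpc.py helper from the SPDK tree is used for the readiness poll (that helper call is an illustration, not part of the captured output):

    # start the target in the test namespace with shared-memory id 0 and tracepoint mask 0xFFFF
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # wait until the app accepts RPCs on its default UNIX socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

waitforlisten in the test harness does essentially this, with an upper bound on retries (max_retries=100 in the trace).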
00:34:29.865 [2024-07-12 13:42:27.104753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:34:29.865 13:42:27 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.865 13:42:27 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:29.865 13:42:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:29.865 13:42:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.865 [2024-07-12 13:42:27.242121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.865 13:42:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.865 13:42:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.865 ************************************ 00:34:29.865 START TEST fio_dif_1_default 00:34:29.865 ************************************ 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.865 bdev_null0 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.865 [2024-07-12 13:42:27.298415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:29.865 { 00:34:29.865 "params": { 00:34:29.865 "name": "Nvme$subsystem", 00:34:29.865 "trtype": "$TEST_TRANSPORT", 00:34:29.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.865 "adrfam": "ipv4", 00:34:29.865 "trsvcid": "$NVMF_PORT", 00:34:29.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.865 "hdgst": ${hdgst:-false}, 00:34:29.865 "ddgst": ${ddgst:-false} 00:34:29.865 }, 00:34:29.865 "method": "bdev_nvme_attach_controller" 00:34:29.865 } 00:34:29.865 EOF 00:34:29.865 )") 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.865 13:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:29.866 "params": { 00:34:29.866 "name": "Nvme0", 00:34:29.866 "trtype": "tcp", 00:34:29.866 "traddr": "10.0.0.2", 00:34:29.866 "adrfam": "ipv4", 00:34:29.866 "trsvcid": "4420", 00:34:29.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.866 "hdgst": false, 00:34:29.866 "ddgst": false 00:34:29.866 }, 00:34:29.866 "method": "bdev_nvme_attach_controller" 00:34:29.866 }' 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:29.866 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:30.124 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:30.124 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:30.124 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:30.124 13:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.124 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:30.124 fio-3.35 00:34:30.124 Starting 1 thread 00:34:30.124 EAL: No free 2048 kB hugepages reported on node 1 00:34:42.315 00:34:42.315 filename0: (groupid=0, jobs=1): err= 0: pid=3742053: Fri Jul 12 13:42:38 2024 00:34:42.315 read: IOPS=96, BW=387KiB/s (397kB/s)(3888KiB/10040msec) 00:34:42.315 slat (nsec): min=5058, max=62431, avg=9739.27, stdev=3685.50 00:34:42.315 clat (usec): min=40888, max=47370, avg=41283.86, stdev=612.34 00:34:42.315 lat (usec): min=40896, max=47386, avg=41293.60, stdev=612.74 00:34:42.315 clat percentiles (usec): 00:34:42.315 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:42.315 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:42.315 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:42.315 | 99.00th=[42730], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:34:42.315 | 99.99th=[47449] 00:34:42.315 bw ( KiB/s): min= 352, max= 416, per=99.94%, avg=387.20, stdev=14.31, samples=20 00:34:42.315 iops : min= 88, max= 104, 
avg=96.80, stdev= 3.58, samples=20 00:34:42.315 lat (msec) : 50=100.00% 00:34:42.315 cpu : usr=89.61%, sys=10.12%, ctx=18, majf=0, minf=311 00:34:42.315 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.315 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.315 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:42.315 00:34:42.315 Run status group 0 (all jobs): 00:34:42.315 READ: bw=387KiB/s (397kB/s), 387KiB/s-387KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10040-10040msec 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.315 00:34:42.315 real 0m11.036s 00:34:42.315 user 0m10.106s 00:34:42.315 sys 0m1.292s 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 ************************************ 00:34:42.315 END TEST fio_dif_1_default 00:34:42.315 ************************************ 00:34:42.315 13:42:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:42.315 13:42:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:42.315 13:42:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:42.315 13:42:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 ************************************ 00:34:42.315 START TEST fio_dif_1_multi_subsystems 00:34:42.315 ************************************ 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
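Before the multi-subsystem pass gets going, a quick consistency check on the single-subsystem summary above: 3888 KiB of 4 KiB random reads is 972 I/Os (matching issued rwts: total=972), and 972 I/Os over the 10040 ms runtime is roughly 96.8 IOPS, i.e. the reported 387 KiB/s (397 kB/s). The same arithmetic as a one-liner:

    # reported: io=3888KiB, run=10040msec, bs=4KiB
    awk 'BEGIN { ios = 3888/4; printf "%d IOs, %.1f IOPS, %.0f KiB/s\n", ios, ios/10.040, 3888/10.040 }'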
00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 bdev_null0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 [2024-07-12 13:42:38.382209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.315 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.315 bdev_null1 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.316 13:42:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:42.316 { 00:34:42.316 "params": { 00:34:42.316 "name": "Nvme$subsystem", 00:34:42.316 "trtype": "$TEST_TRANSPORT", 00:34:42.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.316 "adrfam": "ipv4", 00:34:42.316 "trsvcid": "$NVMF_PORT", 00:34:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.316 "hdgst": ${hdgst:-false}, 00:34:42.316 "ddgst": ${ddgst:-false} 00:34:42.316 }, 00:34:42.316 "method": "bdev_nvme_attach_controller" 00:34:42.316 } 00:34:42.316 EOF 00:34:42.316 )") 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:42.316 { 00:34:42.316 "params": { 00:34:42.316 "name": "Nvme$subsystem", 00:34:42.316 "trtype": "$TEST_TRANSPORT", 00:34:42.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.316 "adrfam": "ipv4", 00:34:42.316 "trsvcid": "$NVMF_PORT", 00:34:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.316 "hdgst": ${hdgst:-false}, 00:34:42.316 "ddgst": ${ddgst:-false} 00:34:42.316 }, 00:34:42.316 "method": "bdev_nvme_attach_controller" 00:34:42.316 } 00:34:42.316 EOF 00:34:42.316 )") 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
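The gen_nvmf_target_json trace above shows how the per-subsystem connection parameters are assembled: one heredoc fragment is appended to the bash array config for each subsystem, the fragments are joined with IFS=, and pretty-printed through jq, and the joined result appears in the next trace lines. A reduced sketch of that pattern (the field list is trimmed here for illustration; the real helper emits the full fragment set that the fio bdev plugin's --spdk_json_conf input expects):

    # build one JSON fragment per subsystem, then join them with commas
    config=()
    for subsystem in 0 1; do
        config+=("{ \"params\": { \"name\": \"Nvme$subsystem\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\" }, \"method\": \"bdev_nvme_attach_controller\" }")
    done
    IFS=,
    printf '[%s]\n' "${config[*]}" | jq .   # "${config[*]}" expands with "," between elements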
00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:42.316 "params": { 00:34:42.316 "name": "Nvme0", 00:34:42.316 "trtype": "tcp", 00:34:42.316 "traddr": "10.0.0.2", 00:34:42.316 "adrfam": "ipv4", 00:34:42.316 "trsvcid": "4420", 00:34:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.316 "hdgst": false, 00:34:42.316 "ddgst": false 00:34:42.316 }, 00:34:42.316 "method": "bdev_nvme_attach_controller" 00:34:42.316 },{ 00:34:42.316 "params": { 00:34:42.316 "name": "Nvme1", 00:34:42.316 "trtype": "tcp", 00:34:42.316 "traddr": "10.0.0.2", 00:34:42.316 "adrfam": "ipv4", 00:34:42.316 "trsvcid": "4420", 00:34:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:42.316 "hdgst": false, 00:34:42.316 "ddgst": false 00:34:42.316 }, 00:34:42.316 "method": "bdev_nvme_attach_controller" 00:34:42.316 }' 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:42.316 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:42.317 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:42.317 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:42.317 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:42.317 13:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.317 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:42.317 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:42.317 fio-3.35 00:34:42.317 Starting 2 threads 00:34:42.317 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.279 00:34:52.279 filename0: (groupid=0, jobs=1): err= 0: pid=3743446: Fri Jul 12 13:42:49 2024 00:34:52.279 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:34:52.279 slat (nsec): min=6986, max=61096, avg=8770.89, stdev=2970.44 00:34:52.279 clat (usec): min=40881, max=42402, avg=41011.65, stdev=194.69 00:34:52.279 lat (usec): min=40888, max=42433, avg=41020.42, stdev=195.26 00:34:52.279 clat percentiles (usec): 00:34:52.279 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:52.279 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:52.279 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:52.279 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:52.279 | 99.99th=[42206] 
00:34:52.279 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:34:52.279 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:52.279 lat (msec) : 50=100.00% 00:34:52.279 cpu : usr=94.58%, sys=5.11%, ctx=18, majf=0, minf=157 00:34:52.279 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.279 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.279 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:52.279 filename1: (groupid=0, jobs=1): err= 0: pid=3743447: Fri Jul 12 13:42:49 2024 00:34:52.279 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:34:52.279 slat (nsec): min=6967, max=79029, avg=8870.98, stdev=3332.20 00:34:52.279 clat (usec): min=40888, max=42418, avg=41007.01, stdev=182.77 00:34:52.279 lat (usec): min=40895, max=42434, avg=41015.88, stdev=183.72 00:34:52.279 clat percentiles (usec): 00:34:52.279 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:52.279 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:52.279 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:52.279 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:52.279 | 99.99th=[42206] 00:34:52.279 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:34:52.279 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:52.279 lat (msec) : 50=100.00% 00:34:52.279 cpu : usr=95.11%, sys=4.58%, ctx=13, majf=0, minf=151 00:34:52.279 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.279 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.279 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:52.279 00:34:52.279 Run status group 0 (all jobs): 00:34:52.279 READ: bw=780KiB/s (798kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10013-10014msec 00:34:52.279 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:52.279 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:52.279 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:52.279 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:52.279 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.280 13:42:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.280 00:34:52.280 real 0m11.315s 00:34:52.280 user 0m20.372s 00:34:52.280 sys 0m1.251s 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 ************************************ 00:34:52.280 END TEST fio_dif_1_multi_subsystems 00:34:52.280 ************************************ 00:34:52.280 13:42:49 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:52.280 13:42:49 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:52.280 13:42:49 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:52.280 13:42:49 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 ************************************ 00:34:52.280 START TEST fio_dif_rand_params 00:34:52.280 ************************************ 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
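Each create_subsystem step in this file reduces to the same four RPCs, and destroy_subsystem later undoes them in reverse. For subsystem 0 of this rand_params pass they are, written as standalone rpc.py calls (the test issues the identical commands through rpc_cmd over /var/tmp/spdk.sock):

    # 64 MB null bdev, 512-byte data blocks plus 16 bytes of metadata, protected with DIF type 3
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # teardown, as destroy_subsystem does it
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0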
00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 bdev_null0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.280 [2024-07-12 13:42:49.739239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:52.280 { 00:34:52.280 "params": { 00:34:52.280 "name": "Nvme$subsystem", 00:34:52.280 "trtype": "$TEST_TRANSPORT", 00:34:52.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:52.280 "adrfam": "ipv4", 00:34:52.280 "trsvcid": "$NVMF_PORT", 00:34:52.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:52.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:52.280 "hdgst": ${hdgst:-false}, 00:34:52.280 "ddgst": ${ddgst:-false} 00:34:52.280 }, 00:34:52.280 "method": "bdev_nvme_attach_controller" 00:34:52.280 } 00:34:52.280 EOF 00:34:52.280 )") 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
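The fio_bdev/fio_plugin call traced above runs stock fio with the SPDK bdev ioengine preloaded; the generated JSON target config and the fio job file are handed over as /dev/fd descriptors rather than named files (the actual LD_PRELOAD assignment and /usr/src/fio/fio launch follow a few lines below). A rough standalone equivalent with the two inputs written to ordinary files (the file names here are illustrative):

    # bdev.json: output of gen_nvmf_target_json;  job.fio: output of gen_fio_conf
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

The surrounding ldd | grep libasan / libclang_rt.asan steps only decide whether a sanitizer runtime has to be prepended to LD_PRELOAD as well; in this run none is found, so asan_lib stays empty.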
00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:52.280 13:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:52.280 "params": { 00:34:52.280 "name": "Nvme0", 00:34:52.280 "trtype": "tcp", 00:34:52.280 "traddr": "10.0.0.2", 00:34:52.280 "adrfam": "ipv4", 00:34:52.280 "trsvcid": "4420", 00:34:52.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:52.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:52.280 "hdgst": false, 00:34:52.280 "ddgst": false 00:34:52.280 }, 00:34:52.280 "method": "bdev_nvme_attach_controller" 00:34:52.280 }' 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:52.540 13:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.540 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:52.540 ... 
00:34:52.540 fio-3.35 00:34:52.540 Starting 3 threads 00:34:52.798 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.357 00:34:59.357 filename0: (groupid=0, jobs=1): err= 0: pid=3744842: Fri Jul 12 13:42:55 2024 00:34:59.357 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(131MiB/5043msec) 00:34:59.357 slat (nsec): min=7310, max=38151, avg=11850.66, stdev=3188.46 00:34:59.357 clat (usec): min=4997, max=89841, avg=14364.38, stdev=13925.12 00:34:59.357 lat (usec): min=5008, max=89853, avg=14376.24, stdev=13925.12 00:34:59.357 clat percentiles (usec): 00:34:59.357 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 5997], 20.00th=[ 7242], 00:34:59.357 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[10159], 00:34:59.357 | 70.00th=[11863], 80.00th=[13435], 90.00th=[47973], 95.00th=[51643], 00:34:59.357 | 99.00th=[53740], 99.50th=[54264], 99.90th=[88605], 99.95th=[89654], 00:34:59.357 | 99.99th=[89654] 00:34:59.357 bw ( KiB/s): min=16384, max=34560, per=35.60%, avg=26803.20, stdev=6284.67, samples=10 00:34:59.357 iops : min= 128, max= 270, avg=209.40, stdev=49.10, samples=10 00:34:59.357 lat (msec) : 10=58.72%, 20=29.65%, 50=4.58%, 100=7.05% 00:34:59.357 cpu : usr=92.60%, sys=6.96%, ctx=17, majf=0, minf=124 00:34:59.357 IO depths : 1=3.5%, 2=96.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.357 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:59.357 filename0: (groupid=0, jobs=1): err= 0: pid=3744843: Fri Jul 12 13:42:55 2024 00:34:59.357 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(112MiB/5026msec) 00:34:59.357 slat (usec): min=7, max=120, avg=12.92, stdev= 5.43 00:34:59.357 clat (usec): min=5182, max=93046, avg=16805.42, stdev=16287.62 00:34:59.357 lat (usec): min=5193, max=93060, avg=16818.35, stdev=16287.58 00:34:59.357 clat percentiles (usec): 00:34:59.357 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7963], 00:34:59.357 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[11076], 00:34:59.357 | 70.00th=[12125], 80.00th=[13173], 90.00th=[50070], 95.00th=[51643], 00:34:59.357 | 99.00th=[53740], 99.50th=[54789], 99.90th=[92799], 99.95th=[92799], 00:34:59.357 | 99.99th=[92799] 00:34:59.357 bw ( KiB/s): min=17152, max=27648, per=30.37%, avg=22860.80, stdev=3584.10, samples=10 00:34:59.357 iops : min= 134, max= 216, avg=178.60, stdev=28.00, samples=10 00:34:59.357 lat (msec) : 10=51.90%, 20=30.80%, 50=6.92%, 100=10.38% 00:34:59.357 cpu : usr=92.62%, sys=6.83%, ctx=16, majf=0, minf=216 00:34:59.357 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.357 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:59.357 filename0: (groupid=0, jobs=1): err= 0: pid=3744844: Fri Jul 12 13:42:55 2024 00:34:59.357 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(128MiB/5013msec) 00:34:59.357 slat (nsec): min=7444, max=56833, avg=12340.24, stdev=3293.35 00:34:59.357 clat (usec): min=5265, max=94203, avg=14708.99, stdev=13978.66 00:34:59.357 lat (usec): min=5276, max=94216, avg=14721.33, stdev=13978.63 00:34:59.357 clat percentiles (usec): 
00:34:59.357 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 8160], 00:34:59.357 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10683], 00:34:59.357 | 70.00th=[11994], 80.00th=[13173], 90.00th=[48497], 95.00th=[51119], 00:34:59.357 | 99.00th=[54264], 99.50th=[55313], 99.90th=[89654], 99.95th=[93848], 00:34:59.357 | 99.99th=[93848] 00:34:59.357 bw ( KiB/s): min=17920, max=31488, per=34.62%, avg=26060.80, stdev=4259.49, samples=10 00:34:59.357 iops : min= 140, max= 246, avg=203.60, stdev=33.28, samples=10 00:34:59.357 lat (msec) : 10=52.50%, 20=35.85%, 50=5.39%, 100=6.27% 00:34:59.357 cpu : usr=92.16%, sys=7.22%, ctx=16, majf=0, minf=123 00:34:59.357 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.357 issued rwts: total=1021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:59.357 00:34:59.357 Run status group 0 (all jobs): 00:34:59.357 READ: bw=73.5MiB/s (77.1MB/s), 22.3MiB/s-26.0MiB/s (23.4MB/s-27.3MB/s), io=371MiB (389MB), run=5013-5043msec 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
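The second rand_params pass announced above switches to small blocks and a deeper queue: NULL_DIF=2 (the null bdevs are created with --dif-type 2), bs=4k, numjobs=8, iodepth=16, runtime left unset, and files=2, i.e. two files in addition to file 0, which is why create_subsystems is now called for 0 1 2. The per-subsystem setup is the same as before except for the DIF type; condensed to an rpc.py-style sketch of the three bdevs it creates (illustrative form of the rpc_cmd calls in the trace that follows):

    for i in 0 1 2; do
        ./scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 2
    done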
00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.357 bdev_null0 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.357 [2024-07-12 13:42:55.772551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:59.357 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 bdev_null1 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 bdev_null2 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:59.358 { 00:34:59.358 "params": { 00:34:59.358 "name": "Nvme$subsystem", 00:34:59.358 "trtype": "$TEST_TRANSPORT", 00:34:59.358 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.358 "adrfam": "ipv4", 00:34:59.358 "trsvcid": "$NVMF_PORT", 00:34:59.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.358 "hdgst": ${hdgst:-false}, 00:34:59.358 "ddgst": ${ddgst:-false} 00:34:59.358 }, 00:34:59.358 "method": "bdev_nvme_attach_controller" 00:34:59.358 } 00:34:59.358 EOF 00:34:59.358 )") 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:59.358 { 00:34:59.358 "params": { 00:34:59.358 "name": "Nvme$subsystem", 00:34:59.358 "trtype": "$TEST_TRANSPORT", 00:34:59.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.358 "adrfam": "ipv4", 00:34:59.358 "trsvcid": "$NVMF_PORT", 00:34:59.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.358 "hdgst": ${hdgst:-false}, 00:34:59.358 "ddgst": ${ddgst:-false} 00:34:59.358 }, 00:34:59.358 "method": "bdev_nvme_attach_controller" 00:34:59.358 } 00:34:59.358 EOF 00:34:59.358 )") 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:59.358 { 00:34:59.358 "params": { 00:34:59.358 "name": "Nvme$subsystem", 00:34:59.358 "trtype": "$TEST_TRANSPORT", 00:34:59.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.358 "adrfam": "ipv4", 00:34:59.358 "trsvcid": "$NVMF_PORT", 00:34:59.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.358 "hdgst": ${hdgst:-false}, 00:34:59.358 "ddgst": ${ddgst:-false} 00:34:59.358 }, 00:34:59.358 "method": "bdev_nvme_attach_controller" 00:34:59.358 } 00:34:59.358 EOF 00:34:59.358 )") 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:59.358 "params": { 00:34:59.358 "name": "Nvme0", 00:34:59.358 "trtype": "tcp", 00:34:59.358 "traddr": "10.0.0.2", 00:34:59.358 "adrfam": "ipv4", 00:34:59.358 "trsvcid": "4420", 00:34:59.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.358 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.358 "hdgst": false, 00:34:59.358 "ddgst": false 00:34:59.358 }, 00:34:59.358 "method": "bdev_nvme_attach_controller" 00:34:59.358 },{ 00:34:59.358 "params": { 00:34:59.358 "name": "Nvme1", 00:34:59.358 "trtype": "tcp", 00:34:59.358 "traddr": "10.0.0.2", 00:34:59.358 "adrfam": "ipv4", 00:34:59.358 "trsvcid": "4420", 00:34:59.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:59.358 "hdgst": false, 00:34:59.358 "ddgst": false 00:34:59.358 }, 00:34:59.358 "method": "bdev_nvme_attach_controller" 00:34:59.358 },{ 00:34:59.358 "params": { 00:34:59.358 "name": "Nvme2", 00:34:59.358 "trtype": "tcp", 00:34:59.358 "traddr": "10.0.0.2", 00:34:59.358 "adrfam": "ipv4", 00:34:59.358 "trsvcid": "4420", 00:34:59.358 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:59.358 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:59.358 "hdgst": false, 00:34:59.358 "ddgst": false 00:34:59.358 }, 00:34:59.358 "method": "bdev_nvme_attach_controller" 00:34:59.358 }' 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.358 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:59.359 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:59.359 13:42:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:34:59.359 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:59.359 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:59.359 13:42:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.359 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:59.359 ... 00:34:59.359 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:59.359 ... 00:34:59.359 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:59.359 ... 00:34:59.359 fio-3.35 00:34:59.359 Starting 24 threads 00:34:59.359 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.591 00:35:11.591 filename0: (groupid=0, jobs=1): err= 0: pid=3745701: Fri Jul 12 13:43:06 2024 00:35:11.591 read: IOPS=448, BW=1795KiB/s (1838kB/s)(17.6MiB/10021msec) 00:35:11.591 slat (usec): min=4, max=107, avg=21.44, stdev=17.62 00:35:11.591 clat (msec): min=2, max=186, avg=35.47, stdev=10.43 00:35:11.591 lat (msec): min=2, max=186, avg=35.50, stdev=10.43 00:35:11.591 clat percentiles (msec): 00:35:11.591 | 1.00th=[ 6], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.591 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:11.591 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.591 | 99.00th=[ 45], 99.50th=[ 45], 99.90th=[ 186], 99.95th=[ 186], 00:35:11.591 | 99.99th=[ 186] 00:35:11.591 bw ( KiB/s): min= 1024, max= 2304, per=4.25%, avg=1791.80, stdev=269.14, samples=20 00:35:11.591 iops : min= 256, max= 576, avg=447.95, stdev=67.28, samples=20 00:35:11.591 lat (msec) : 4=0.36%, 10=1.02%, 20=0.40%, 50=97.86%, 250=0.36% 00:35:11.591 cpu : usr=98.10%, sys=1.44%, ctx=17, majf=0, minf=25 00:35:11.591 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:11.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.591 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.591 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.591 filename0: (groupid=0, jobs=1): err= 0: pid=3745702: Fri Jul 12 13:43:06 2024 00:35:11.591 read: IOPS=449, BW=1799KiB/s (1842kB/s)(17.6MiB/10010msec) 00:35:11.591 slat (nsec): min=4192, max=56749, avg=20314.78, stdev=9911.97 00:35:11.591 clat (msec): min=13, max=186, avg=35.40, stdev=10.17 00:35:11.591 lat (msec): min=13, max=186, avg=35.42, stdev=10.17 00:35:11.591 clat percentiles (msec): 00:35:11.591 | 1.00th=[ 19], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.591 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:11.591 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.591 | 99.00th=[ 45], 99.50th=[ 49], 99.90th=[ 186], 99.95th=[ 186], 00:35:11.591 | 99.99th=[ 188] 00:35:11.591 bw ( KiB/s): min= 1024, max= 2352, per=4.26%, avg=1794.40, stdev=273.42, samples=20 00:35:11.591 iops : min= 256, max= 588, avg=448.60, stdev=68.36, samples=20 00:35:11.591 lat (msec) : 20=1.69%, 50=97.96%, 250=0.36% 00:35:11.591 cpu : usr=95.65%, sys=2.48%, ctx=415, majf=0, minf=21 00:35:11.591 IO 
depths : 1=2.6%, 2=8.5%, 4=23.9%, 8=55.1%, 16=9.9%, 32=0.0%, >=64=0.0% 00:35:11.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.591 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.591 issued rwts: total=4502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.591 filename0: (groupid=0, jobs=1): err= 0: pid=3745703: Fri Jul 12 13:43:06 2024 00:35:11.591 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10081msec) 00:35:11.591 slat (usec): min=12, max=121, avg=48.23, stdev=21.29 00:35:11.591 clat (msec): min=30, max=260, avg=36.09, stdev=13.98 00:35:11.591 lat (msec): min=30, max=260, avg=36.14, stdev=13.98 00:35:11.591 clat percentiles (msec): 00:35:11.591 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.591 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.591 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.591 | 99.00th=[ 45], 99.50th=[ 53], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.591 | 99.99th=[ 259] 00:35:11.591 bw ( KiB/s): min= 1024, max= 1920, per=4.18%, avg=1760.00, stdev=238.11, samples=20 00:35:11.591 iops : min= 256, max= 480, avg=440.00, stdev=59.53, samples=20 00:35:11.591 lat (msec) : 50=99.28%, 100=0.36%, 500=0.36% 00:35:11.591 cpu : usr=92.55%, sys=3.81%, ctx=271, majf=0, minf=19 00:35:11.591 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.591 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.591 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.591 filename0: (groupid=0, jobs=1): err= 0: pid=3745704: Fri Jul 12 13:43:06 2024 00:35:11.591 read: IOPS=440, BW=1761KiB/s (1804kB/s)(17.4MiB/10102msec) 00:35:11.591 slat (usec): min=4, max=120, avg=38.28, stdev=27.52 00:35:11.591 clat (msec): min=18, max=248, avg=36.00, stdev=13.33 00:35:11.591 lat (msec): min=18, max=248, avg=36.04, stdev=13.32 00:35:11.591 clat percentiles (msec): 00:35:11.591 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.591 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.591 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.591 | 99.00th=[ 45], 99.50th=[ 47], 99.90th=[ 249], 99.95th=[ 249], 00:35:11.591 | 99.99th=[ 249] 00:35:11.591 bw ( KiB/s): min= 1024, max= 1923, per=4.21%, avg=1772.95, stdev=232.35, samples=20 00:35:11.591 iops : min= 256, max= 480, avg=443.20, stdev=58.06, samples=20 00:35:11.591 lat (msec) : 20=0.04%, 50=99.60%, 250=0.36% 00:35:11.591 cpu : usr=95.24%, sys=2.51%, ctx=144, majf=0, minf=19 00:35:11.592 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:11.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.592 filename0: (groupid=0, jobs=1): err= 0: pid=3745705: Fri Jul 12 13:43:06 2024 00:35:11.592 read: IOPS=437, BW=1751KiB/s (1793kB/s)(17.2MiB/10081msec) 00:35:11.592 slat (usec): min=8, max=111, avg=38.43, stdev=15.35 00:35:11.592 clat (msec): min=26, max=260, avg=36.22, stdev=13.97 00:35:11.592 
lat (msec): min=26, max=260, avg=36.26, stdev=13.97 00:35:11.592 clat percentiles (msec): 00:35:11.592 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.592 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.592 | 99.00th=[ 45], 99.50th=[ 53], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.592 | 99.99th=[ 259] 00:35:11.592 bw ( KiB/s): min= 1024, max= 1920, per=4.18%, avg=1759.20, stdev=238.03, samples=20 00:35:11.592 iops : min= 256, max= 480, avg=439.80, stdev=59.51, samples=20 00:35:11.592 lat (msec) : 50=99.23%, 100=0.41%, 500=0.36% 00:35:11.592 cpu : usr=94.85%, sys=2.80%, ctx=120, majf=0, minf=25 00:35:11.592 IO depths : 1=4.0%, 2=10.2%, 4=24.9%, 8=52.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:11.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 issued rwts: total=4414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.592 filename0: (groupid=0, jobs=1): err= 0: pid=3745706: Fri Jul 12 13:43:06 2024 00:35:11.592 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10081msec) 00:35:11.592 slat (usec): min=9, max=301, avg=37.73, stdev=12.77 00:35:11.592 clat (msec): min=30, max=259, avg=36.20, stdev=13.96 00:35:11.592 lat (msec): min=30, max=259, avg=36.24, stdev=13.96 00:35:11.592 clat percentiles (msec): 00:35:11.592 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.592 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.592 | 99.00th=[ 45], 99.50th=[ 52], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.592 | 99.99th=[ 259] 00:35:11.592 bw ( KiB/s): min= 1024, max= 1920, per=4.18%, avg=1760.00, stdev=238.11, samples=20 00:35:11.592 iops : min= 256, max= 480, avg=440.00, stdev=59.53, samples=20 00:35:11.592 lat (msec) : 50=99.28%, 100=0.36%, 500=0.36% 00:35:11.592 cpu : usr=96.70%, sys=2.15%, ctx=248, majf=0, minf=28 00:35:11.592 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.592 filename0: (groupid=0, jobs=1): err= 0: pid=3745707: Fri Jul 12 13:43:06 2024 00:35:11.592 read: IOPS=437, BW=1748KiB/s (1790kB/s)(17.2MiB/10067msec) 00:35:11.592 slat (usec): min=12, max=110, avg=36.87, stdev=17.49 00:35:11.592 clat (msec): min=31, max=250, avg=36.28, stdev=13.62 00:35:11.592 lat (msec): min=31, max=250, avg=36.32, stdev=13.62 00:35:11.592 clat percentiles (msec): 00:35:11.592 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.592 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.592 | 99.00th=[ 47], 99.50th=[ 73], 99.90th=[ 251], 99.95th=[ 251], 00:35:11.592 | 99.99th=[ 251] 00:35:11.592 bw ( KiB/s): min= 1408, max= 1920, per=4.27%, avg=1798.74, stdev=156.61, samples=19 00:35:11.592 iops : min= 352, max= 480, avg=449.68, stdev=39.15, samples=19 00:35:11.592 lat (msec) : 50=99.27%, 100=0.36%, 500=0.36% 00:35:11.592 cpu : usr=97.88%, sys=1.68%, 
ctx=18, majf=0, minf=28 00:35:11.592 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.592 filename0: (groupid=0, jobs=1): err= 0: pid=3745708: Fri Jul 12 13:43:06 2024 00:35:11.592 read: IOPS=436, BW=1748KiB/s (1790kB/s)(17.2MiB/10069msec) 00:35:11.592 slat (nsec): min=8192, max=73598, avg=31499.78, stdev=9047.94 00:35:11.592 clat (msec): min=32, max=250, avg=36.32, stdev=13.62 00:35:11.592 lat (msec): min=32, max=250, avg=36.35, stdev=13.62 00:35:11.592 clat percentiles (msec): 00:35:11.592 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.592 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.592 | 99.00th=[ 47], 99.50th=[ 75], 99.90th=[ 251], 99.95th=[ 251], 00:35:11.592 | 99.99th=[ 251] 00:35:11.592 bw ( KiB/s): min= 896, max= 1920, per=4.16%, avg=1753.60, stdev=252.95, samples=20 00:35:11.592 iops : min= 224, max= 480, avg=438.40, stdev=63.24, samples=20 00:35:11.592 lat (msec) : 50=99.27%, 100=0.36%, 500=0.36% 00:35:11.592 cpu : usr=98.06%, sys=1.50%, ctx=18, majf=0, minf=20 00:35:11.592 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.592 filename1: (groupid=0, jobs=1): err= 0: pid=3745709: Fri Jul 12 13:43:06 2024 00:35:11.592 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10081msec) 00:35:11.592 slat (usec): min=13, max=132, avg=45.41, stdev=20.46 00:35:11.592 clat (msec): min=27, max=259, avg=36.12, stdev=13.96 00:35:11.592 lat (msec): min=28, max=259, avg=36.17, stdev=13.96 00:35:11.592 clat percentiles (msec): 00:35:11.592 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.592 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.592 | 99.00th=[ 45], 99.50th=[ 53], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.592 | 99.99th=[ 259] 00:35:11.592 bw ( KiB/s): min= 1024, max= 1920, per=4.18%, avg=1760.00, stdev=238.11, samples=20 00:35:11.592 iops : min= 256, max= 480, avg=440.00, stdev=59.53, samples=20 00:35:11.592 lat (msec) : 50=99.28%, 100=0.36%, 500=0.36% 00:35:11.592 cpu : usr=97.25%, sys=1.86%, ctx=54, majf=0, minf=23 00:35:11.592 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:11.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.592 filename1: (groupid=0, jobs=1): err= 0: pid=3745710: Fri Jul 12 13:43:06 2024 00:35:11.592 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.2MiB/10071msec) 00:35:11.592 slat (usec): min=8, max=155, avg=55.94, stdev=27.08 00:35:11.592 clat (msec): 
min=17, max=250, avg=36.24, stdev=13.50 00:35:11.592 lat (msec): min=17, max=250, avg=36.30, stdev=13.49 00:35:11.592 clat percentiles (msec): 00:35:11.592 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:11.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.592 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.592 | 99.00th=[ 49], 99.50th=[ 79], 99.90th=[ 251], 99.95th=[ 251], 00:35:11.592 | 99.99th=[ 251] 00:35:11.592 bw ( KiB/s): min= 1024, max= 1920, per=4.16%, avg=1751.80, stdev=236.42, samples=20 00:35:11.592 iops : min= 256, max= 480, avg=437.95, stdev=59.10, samples=20 00:35:11.592 lat (msec) : 20=0.39%, 50=98.70%, 100=0.55%, 250=0.05%, 500=0.32% 00:35:11.592 cpu : usr=95.60%, sys=2.29%, ctx=206, majf=0, minf=27 00:35:11.592 IO depths : 1=0.7%, 2=6.9%, 4=24.8%, 8=55.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:11.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 issued rwts: total=4396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.592 filename1: (groupid=0, jobs=1): err= 0: pid=3745711: Fri Jul 12 13:43:06 2024 00:35:11.592 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10067msec) 00:35:11.592 slat (usec): min=8, max=117, avg=29.66, stdev=21.64 00:35:11.592 clat (msec): min=14, max=250, avg=36.32, stdev=13.80 00:35:11.592 lat (msec): min=14, max=250, avg=36.35, stdev=13.80 00:35:11.592 clat percentiles (msec): 00:35:11.592 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:11.592 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.592 | 99.00th=[ 58], 99.50th=[ 73], 99.90th=[ 251], 99.95th=[ 251], 00:35:11.592 | 99.99th=[ 251] 00:35:11.592 bw ( KiB/s): min= 1408, max= 2048, per=4.28%, avg=1802.95, stdev=164.56, samples=19 00:35:11.592 iops : min= 352, max= 512, avg=450.74, stdev=41.14, samples=19 00:35:11.592 lat (msec) : 20=0.41%, 50=98.46%, 100=0.77%, 500=0.36% 00:35:11.592 cpu : usr=98.05%, sys=1.44%, ctx=58, majf=0, minf=29 00:35:11.592 IO depths : 1=2.0%, 2=4.5%, 4=10.1%, 8=69.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:35:11.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 complete : 0=0.0%, 4=91.2%, 8=6.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.592 issued rwts: total=4410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.592 filename1: (groupid=0, jobs=1): err= 0: pid=3745712: Fri Jul 12 13:43:06 2024 00:35:11.592 read: IOPS=435, BW=1742KiB/s (1784kB/s)(17.1MiB/10067msec) 00:35:11.592 slat (usec): min=8, max=111, avg=37.54, stdev=15.25 00:35:11.592 clat (msec): min=32, max=259, avg=36.38, stdev=14.56 00:35:11.592 lat (msec): min=32, max=260, avg=36.41, stdev=14.56 00:35:11.592 clat percentiles (msec): 00:35:11.592 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.592 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.592 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.592 | 99.00th=[ 45], 99.50th=[ 105], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.592 | 99.99th=[ 259] 00:35:11.592 bw ( KiB/s): min= 896, max= 1920, per=4.15%, avg=1747.20, stdev=256.92, samples=20 00:35:11.592 iops : min= 224, max= 480, avg=436.80, stdev=64.23, samples=20 00:35:11.592 lat 
(msec) : 50=99.27%, 250=0.36%, 500=0.36% 00:35:11.592 cpu : usr=97.55%, sys=1.77%, ctx=107, majf=0, minf=24 00:35:11.593 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.593 filename1: (groupid=0, jobs=1): err= 0: pid=3745713: Fri Jul 12 13:43:06 2024 00:35:11.593 read: IOPS=462, BW=1850KiB/s (1895kB/s)(18.2MiB/10075msec) 00:35:11.593 slat (usec): min=4, max=302, avg=20.00, stdev=18.49 00:35:11.593 clat (msec): min=14, max=224, avg=34.42, stdev=12.91 00:35:11.593 lat (msec): min=14, max=224, avg=34.44, stdev=12.91 00:35:11.593 clat percentiles (msec): 00:35:11.593 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 28], 00:35:11.593 | 30.00th=[ 31], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.593 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 44], 95.00th=[ 45], 00:35:11.593 | 99.00th=[ 62], 99.50th=[ 114], 99.90th=[ 224], 99.95th=[ 226], 00:35:11.593 | 99.99th=[ 226] 00:35:11.593 bw ( KiB/s): min= 1120, max= 2192, per=4.41%, avg=1857.60, stdev=271.38, samples=20 00:35:11.593 iops : min= 280, max= 548, avg=464.40, stdev=67.84, samples=20 00:35:11.593 lat (msec) : 20=0.73%, 50=96.37%, 100=2.38%, 250=0.52% 00:35:11.593 cpu : usr=94.58%, sys=2.99%, ctx=350, majf=0, minf=28 00:35:11.593 IO depths : 1=0.1%, 2=0.3%, 4=3.9%, 8=80.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:35:11.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 complete : 0=0.0%, 4=89.1%, 8=8.3%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 issued rwts: total=4660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.593 filename1: (groupid=0, jobs=1): err= 0: pid=3745714: Fri Jul 12 13:43:06 2024 00:35:11.593 read: IOPS=436, BW=1746KiB/s (1788kB/s)(17.2MiB/10079msec) 00:35:11.593 slat (nsec): min=4051, max=86993, avg=36922.41, stdev=12835.91 00:35:11.593 clat (msec): min=29, max=260, avg=36.30, stdev=14.21 00:35:11.593 lat (msec): min=29, max=260, avg=36.34, stdev=14.21 00:35:11.593 clat percentiles (msec): 00:35:11.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.593 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.593 | 99.00th=[ 45], 99.50th=[ 81], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.593 | 99.99th=[ 259] 00:35:11.593 bw ( KiB/s): min= 1013, max= 1920, per=4.16%, avg=1753.05, stdev=240.70, samples=20 00:35:11.593 iops : min= 253, max= 480, avg=438.25, stdev=60.22, samples=20 00:35:11.593 lat (msec) : 50=99.27%, 100=0.36%, 500=0.36% 00:35:11.593 cpu : usr=98.22%, sys=1.34%, ctx=13, majf=0, minf=21 00:35:11.593 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:11.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.593 filename1: (groupid=0, jobs=1): err= 0: pid=3745715: Fri Jul 12 13:43:06 2024 00:35:11.593 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10081msec) 
00:35:11.593 slat (nsec): min=8241, max=86782, avg=32104.13, stdev=12091.99 00:35:11.593 clat (msec): min=31, max=259, avg=36.26, stdev=13.91 00:35:11.593 lat (msec): min=31, max=259, avg=36.29, stdev=13.92 00:35:11.593 clat percentiles (msec): 00:35:11.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:11.593 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.593 | 99.00th=[ 45], 99.50th=[ 52], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.593 | 99.99th=[ 259] 00:35:11.593 bw ( KiB/s): min= 1024, max= 1920, per=4.18%, avg=1760.00, stdev=238.11, samples=20 00:35:11.593 iops : min= 256, max= 480, avg=440.00, stdev=59.53, samples=20 00:35:11.593 lat (msec) : 50=99.28%, 100=0.36%, 500=0.36% 00:35:11.593 cpu : usr=98.40%, sys=1.18%, ctx=13, majf=0, minf=19 00:35:11.593 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.593 filename1: (groupid=0, jobs=1): err= 0: pid=3745716: Fri Jul 12 13:43:06 2024 00:35:11.593 read: IOPS=440, BW=1761KiB/s (1803kB/s)(17.4MiB/10106msec) 00:35:11.593 slat (usec): min=4, max=114, avg=25.45, stdev=14.38 00:35:11.593 clat (msec): min=20, max=251, avg=36.14, stdev=13.40 00:35:11.593 lat (msec): min=20, max=251, avg=36.17, stdev=13.40 00:35:11.593 clat percentiles (msec): 00:35:11.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:11.593 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.593 | 99.00th=[ 45], 99.50th=[ 47], 99.90th=[ 251], 99.95th=[ 251], 00:35:11.593 | 99.99th=[ 251] 00:35:11.593 bw ( KiB/s): min= 1024, max= 1920, per=4.21%, avg=1772.80, stdev=232.25, samples=20 00:35:11.593 iops : min= 256, max= 480, avg=443.20, stdev=58.06, samples=20 00:35:11.593 lat (msec) : 50=99.64%, 500=0.36% 00:35:11.593 cpu : usr=95.08%, sys=2.96%, ctx=69, majf=0, minf=35 00:35:11.593 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:11.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.593 filename2: (groupid=0, jobs=1): err= 0: pid=3745717: Fri Jul 12 13:43:06 2024 00:35:11.593 read: IOPS=451, BW=1806KiB/s (1849kB/s)(17.7MiB/10020msec) 00:35:11.593 slat (usec): min=4, max=119, avg=40.50, stdev=28.17 00:35:11.593 clat (msec): min=3, max=186, avg=35.07, stdev=10.56 00:35:11.593 lat (msec): min=3, max=186, avg=35.11, stdev=10.55 00:35:11.593 clat percentiles (msec): 00:35:11.593 | 1.00th=[ 7], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:11.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.593 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.593 | 99.00th=[ 45], 99.50th=[ 45], 99.90th=[ 186], 99.95th=[ 186], 00:35:11.593 | 99.99th=[ 186] 00:35:11.593 bw ( KiB/s): min= 1024, max= 2533, per=4.28%, avg=1803.25, stdev=295.64, samples=20 00:35:11.593 iops : min= 256, max= 633, 
avg=450.80, stdev=73.88, samples=20 00:35:11.593 lat (msec) : 4=0.33%, 10=1.35%, 20=0.35%, 50=97.61%, 250=0.35% 00:35:11.593 cpu : usr=91.70%, sys=4.16%, ctx=238, majf=0, minf=20 00:35:11.593 IO depths : 1=6.0%, 2=12.1%, 4=24.3%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:11.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 issued rwts: total=4524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.593 filename2: (groupid=0, jobs=1): err= 0: pid=3745718: Fri Jul 12 13:43:06 2024 00:35:11.593 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10080msec) 00:35:11.593 slat (usec): min=8, max=115, avg=27.20, stdev=19.96 00:35:11.593 clat (msec): min=29, max=259, avg=36.30, stdev=13.95 00:35:11.593 lat (msec): min=30, max=259, avg=36.33, stdev=13.95 00:35:11.593 clat percentiles (msec): 00:35:11.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:11.593 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.593 | 99.00th=[ 45], 99.50th=[ 53], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.593 | 99.99th=[ 259] 00:35:11.593 bw ( KiB/s): min= 1024, max= 1920, per=4.18%, avg=1760.00, stdev=238.11, samples=20 00:35:11.593 iops : min= 256, max= 480, avg=440.00, stdev=59.53, samples=20 00:35:11.593 lat (msec) : 50=99.28%, 100=0.36%, 500=0.36% 00:35:11.593 cpu : usr=98.17%, sys=1.39%, ctx=18, majf=0, minf=25 00:35:11.593 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.593 filename2: (groupid=0, jobs=1): err= 0: pid=3745719: Fri Jul 12 13:43:06 2024 00:35:11.593 read: IOPS=437, BW=1748KiB/s (1790kB/s)(17.2MiB/10067msec) 00:35:11.593 slat (nsec): min=8210, max=68302, avg=29742.81, stdev=10617.32 00:35:11.593 clat (msec): min=17, max=259, avg=36.34, stdev=14.24 00:35:11.593 lat (msec): min=17, max=259, avg=36.37, stdev=14.24 00:35:11.593 clat percentiles (msec): 00:35:11.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:11.593 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.593 | 99.00th=[ 54], 99.50th=[ 75], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.593 | 99.99th=[ 259] 00:35:11.593 bw ( KiB/s): min= 896, max= 1920, per=4.16%, avg=1753.60, stdev=249.90, samples=20 00:35:11.593 iops : min= 224, max= 480, avg=438.40, stdev=62.47, samples=20 00:35:11.593 lat (msec) : 20=0.11%, 50=98.61%, 100=0.91%, 500=0.36% 00:35:11.593 cpu : usr=95.16%, sys=2.64%, ctx=149, majf=0, minf=25 00:35:11.593 IO depths : 1=3.6%, 2=9.7%, 4=24.6%, 8=53.2%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:11.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.593 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.593 filename2: (groupid=0, jobs=1): err= 0: pid=3745720: Fri Jul 12 13:43:06 2024 
00:35:11.593 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10081msec) 00:35:11.593 slat (usec): min=13, max=112, avg=38.84, stdev=13.64 00:35:11.593 clat (msec): min=30, max=259, avg=36.18, stdev=13.96 00:35:11.593 lat (msec): min=30, max=260, avg=36.22, stdev=13.96 00:35:11.593 clat percentiles (msec): 00:35:11.593 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.593 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.593 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.593 | 99.00th=[ 45], 99.50th=[ 53], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.593 | 99.99th=[ 259] 00:35:11.593 bw ( KiB/s): min= 1024, max= 1920, per=4.18%, avg=1760.00, stdev=238.11, samples=20 00:35:11.593 iops : min= 256, max= 480, avg=440.00, stdev=59.53, samples=20 00:35:11.594 lat (msec) : 50=99.28%, 100=0.36%, 500=0.36% 00:35:11.594 cpu : usr=98.31%, sys=1.24%, ctx=15, majf=0, minf=19 00:35:11.594 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.594 filename2: (groupid=0, jobs=1): err= 0: pid=3745721: Fri Jul 12 13:43:06 2024 00:35:11.594 read: IOPS=436, BW=1747KiB/s (1789kB/s)(17.2MiB/10072msec) 00:35:11.594 slat (usec): min=6, max=128, avg=48.20, stdev=21.59 00:35:11.594 clat (msec): min=32, max=260, avg=36.19, stdev=14.16 00:35:11.594 lat (msec): min=32, max=260, avg=36.24, stdev=14.16 00:35:11.594 clat percentiles (msec): 00:35:11.594 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.594 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.594 | 99.00th=[ 45], 99.50th=[ 74], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.594 | 99.99th=[ 262] 00:35:11.594 bw ( KiB/s): min= 896, max= 1920, per=4.16%, avg=1753.75, stdev=252.90, samples=20 00:35:11.594 iops : min= 224, max= 480, avg=438.40, stdev=63.24, samples=20 00:35:11.594 lat (msec) : 50=99.27%, 100=0.36%, 500=0.36% 00:35:11.594 cpu : usr=97.82%, sys=1.70%, ctx=17, majf=0, minf=23 00:35:11.594 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.594 filename2: (groupid=0, jobs=1): err= 0: pid=3745722: Fri Jul 12 13:43:06 2024 00:35:11.594 read: IOPS=436, BW=1747KiB/s (1789kB/s)(17.2MiB/10074msec) 00:35:11.594 slat (nsec): min=6492, max=69064, avg=30702.55, stdev=9939.12 00:35:11.594 clat (msec): min=32, max=250, avg=36.36, stdev=13.69 00:35:11.594 lat (msec): min=32, max=250, avg=36.39, stdev=13.69 00:35:11.594 clat percentiles (msec): 00:35:11.594 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.594 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.594 | 99.00th=[ 47], 99.50th=[ 81], 99.90th=[ 251], 99.95th=[ 251], 00:35:11.594 | 99.99th=[ 251] 00:35:11.594 bw ( KiB/s): min= 1024, max= 1920, 
per=4.16%, avg=1753.75, stdev=238.87, samples=20 00:35:11.594 iops : min= 256, max= 480, avg=438.40, stdev=59.73, samples=20 00:35:11.594 lat (msec) : 50=99.27%, 100=0.36%, 500=0.36% 00:35:11.594 cpu : usr=97.83%, sys=1.68%, ctx=25, majf=0, minf=27 00:35:11.594 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.594 filename2: (groupid=0, jobs=1): err= 0: pid=3745723: Fri Jul 12 13:43:06 2024 00:35:11.594 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10081msec) 00:35:11.594 slat (usec): min=8, max=123, avg=31.48, stdev=12.52 00:35:11.594 clat (msec): min=29, max=259, avg=36.27, stdev=13.93 00:35:11.594 lat (msec): min=30, max=259, avg=36.30, stdev=13.93 00:35:11.594 clat percentiles (msec): 00:35:11.594 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:11.594 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.594 | 99.00th=[ 45], 99.50th=[ 53], 99.90th=[ 259], 99.95th=[ 259], 00:35:11.594 | 99.99th=[ 259] 00:35:11.594 bw ( KiB/s): min= 1024, max= 1920, per=4.18%, avg=1760.00, stdev=238.11, samples=20 00:35:11.594 iops : min= 256, max= 480, avg=440.00, stdev=59.53, samples=20 00:35:11.594 lat (msec) : 50=99.28%, 100=0.36%, 500=0.36% 00:35:11.594 cpu : usr=97.20%, sys=2.17%, ctx=52, majf=0, minf=16 00:35:11.594 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:11.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.594 filename2: (groupid=0, jobs=1): err= 0: pid=3745724: Fri Jul 12 13:43:06 2024 00:35:11.594 read: IOPS=435, BW=1742KiB/s (1784kB/s)(17.1MiB/10069msec) 00:35:11.594 slat (usec): min=8, max=105, avg=31.32, stdev=16.26 00:35:11.594 clat (msec): min=17, max=250, avg=36.48, stdev=14.03 00:35:11.594 lat (msec): min=17, max=250, avg=36.51, stdev=14.03 00:35:11.594 clat percentiles (msec): 00:35:11.594 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:11.594 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:11.594 | 70.00th=[ 35], 80.00th=[ 38], 90.00th=[ 44], 95.00th=[ 44], 00:35:11.594 | 99.00th=[ 56], 99.50th=[ 93], 99.90th=[ 251], 99.95th=[ 251], 00:35:11.594 | 99.99th=[ 251] 00:35:11.594 bw ( KiB/s): min= 1008, max= 1920, per=4.15%, avg=1748.15, stdev=240.44, samples=20 00:35:11.594 iops : min= 252, max= 480, avg=437.00, stdev=60.15, samples=20 00:35:11.594 lat (msec) : 20=0.09%, 50=98.22%, 100=1.28%, 250=0.09%, 500=0.32% 00:35:11.594 cpu : usr=96.25%, sys=2.16%, ctx=49, majf=0, minf=29 00:35:11.594 IO depths : 1=0.1%, 2=5.8%, 4=23.3%, 8=58.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:11.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 complete : 0=0.0%, 4=94.0%, 8=0.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.594 issued rwts: total=4386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:11.594 
00:35:11.594 Run status group 0 (all jobs): 00:35:11.594 READ: bw=41.1MiB/s (43.1MB/s), 1742KiB/s-1850KiB/s (1784kB/s-1895kB/s), io=416MiB (436MB), run=10010-10106msec 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.594 
13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.594 bdev_null0 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:11.594 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.595 [2024-07-12 13:43:07.306322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.595 bdev_null1 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:11.595 13:43:07 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:11.595 { 00:35:11.595 "params": { 00:35:11.595 "name": "Nvme$subsystem", 00:35:11.595 "trtype": "$TEST_TRANSPORT", 00:35:11.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.595 "adrfam": "ipv4", 00:35:11.595 "trsvcid": "$NVMF_PORT", 00:35:11.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.595 "hdgst": ${hdgst:-false}, 00:35:11.595 "ddgst": ${ddgst:-false} 00:35:11.595 }, 00:35:11.595 "method": "bdev_nvme_attach_controller" 00:35:11.595 } 00:35:11.595 EOF 00:35:11.595 )") 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:11.595 { 00:35:11.595 "params": { 00:35:11.595 "name": "Nvme$subsystem", 00:35:11.595 "trtype": "$TEST_TRANSPORT", 00:35:11.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.595 "adrfam": "ipv4", 00:35:11.595 "trsvcid": "$NVMF_PORT", 00:35:11.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.595 "hdgst": ${hdgst:-false}, 00:35:11.595 "ddgst": ${ddgst:-false} 00:35:11.595 }, 00:35:11.595 "method": "bdev_nvme_attach_controller" 00:35:11.595 } 00:35:11.595 EOF 00:35:11.595 )") 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:11.595 "params": { 00:35:11.595 "name": "Nvme0", 00:35:11.595 "trtype": "tcp", 00:35:11.595 "traddr": "10.0.0.2", 00:35:11.595 "adrfam": "ipv4", 00:35:11.595 "trsvcid": "4420", 00:35:11.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.595 "hdgst": false, 00:35:11.595 "ddgst": false 00:35:11.595 }, 00:35:11.595 "method": "bdev_nvme_attach_controller" 00:35:11.595 },{ 00:35:11.595 "params": { 00:35:11.595 "name": "Nvme1", 00:35:11.595 "trtype": "tcp", 00:35:11.595 "traddr": "10.0.0.2", 00:35:11.595 "adrfam": "ipv4", 00:35:11.595 "trsvcid": "4420", 00:35:11.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:11.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:11.595 "hdgst": false, 00:35:11.595 "ddgst": false 00:35:11.595 }, 00:35:11.595 "method": "bdev_nvme_attach_controller" 00:35:11.595 }' 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:11.595 13:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.595 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:11.595 ... 00:35:11.595 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:11.595 ... 
00:35:11.595 fio-3.35 00:35:11.595 Starting 4 threads 00:35:11.595 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.853 00:35:16.853 filename0: (groupid=0, jobs=1): err= 0: pid=3746985: Fri Jul 12 13:43:13 2024 00:35:16.853 read: IOPS=1925, BW=15.0MiB/s (15.8MB/s)(75.3MiB/5002msec) 00:35:16.853 slat (nsec): min=4047, max=37167, avg=12858.00, stdev=4105.36 00:35:16.853 clat (usec): min=1344, max=7094, avg=4114.75, stdev=676.80 00:35:16.853 lat (usec): min=1358, max=7102, avg=4127.60, stdev=676.06 00:35:16.853 clat percentiles (usec): 00:35:16.853 | 1.00th=[ 3163], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3687], 00:35:16.853 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3982], 60.00th=[ 4015], 00:35:16.853 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 5538], 95.00th=[ 5604], 00:35:16.853 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 6718], 99.95th=[ 7046], 00:35:16.853 | 99.99th=[ 7111] 00:35:16.853 bw ( KiB/s): min=14752, max=15872, per=24.56%, avg=15365.33, stdev=375.83, samples=9 00:35:16.853 iops : min= 1844, max= 1984, avg=1920.67, stdev=46.98, samples=9 00:35:16.853 lat (msec) : 2=0.05%, 4=55.78%, 10=44.17% 00:35:16.853 cpu : usr=93.86%, sys=5.58%, ctx=8, majf=0, minf=9 00:35:16.853 IO depths : 1=0.1%, 2=1.6%, 4=70.4%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.853 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.853 issued rwts: total=9633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.853 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.853 filename0: (groupid=0, jobs=1): err= 0: pid=3746986: Fri Jul 12 13:43:13 2024 00:35:16.853 read: IOPS=1964, BW=15.3MiB/s (16.1MB/s)(76.8MiB/5004msec) 00:35:16.853 slat (nsec): min=3924, max=34969, avg=12493.97, stdev=3742.56 00:35:16.853 clat (usec): min=1181, max=7474, avg=4032.33, stdev=683.65 00:35:16.853 lat (usec): min=1194, max=7498, avg=4044.82, stdev=683.46 00:35:16.853 clat percentiles (usec): 00:35:16.853 | 1.00th=[ 2802], 5.00th=[ 3261], 10.00th=[ 3425], 20.00th=[ 3556], 00:35:16.853 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3982], 00:35:16.853 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 5473], 95.00th=[ 5604], 00:35:16.853 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 6783], 99.95th=[ 7439], 00:35:16.853 | 99.99th=[ 7504] 00:35:16.853 bw ( KiB/s): min=15312, max=16032, per=25.13%, avg=15720.00, stdev=234.33, samples=10 00:35:16.853 iops : min= 1914, max= 2004, avg=1965.00, stdev=29.29, samples=10 00:35:16.853 lat (msec) : 2=0.08%, 4=63.54%, 10=36.38% 00:35:16.853 cpu : usr=92.46%, sys=6.66%, ctx=67, majf=0, minf=0 00:35:16.853 IO depths : 1=0.1%, 2=2.8%, 4=69.8%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.853 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.853 issued rwts: total=9830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.853 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.853 filename1: (groupid=0, jobs=1): err= 0: pid=3746987: Fri Jul 12 13:43:13 2024 00:35:16.853 read: IOPS=2007, BW=15.7MiB/s (16.4MB/s)(78.5MiB/5003msec) 00:35:16.853 slat (nsec): min=3830, max=42952, avg=11893.04, stdev=3943.46 00:35:16.853 clat (usec): min=1130, max=7034, avg=3950.57, stdev=600.68 00:35:16.853 lat (usec): min=1143, max=7046, avg=3962.46, stdev=600.61 00:35:16.853 clat percentiles (usec): 00:35:16.853 | 1.00th=[ 2835], 5.00th=[ 3195], 
10.00th=[ 3392], 20.00th=[ 3621], 00:35:16.853 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3982], 00:35:16.853 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4555], 95.00th=[ 5538], 00:35:16.853 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6521], 99.95th=[ 6849], 00:35:16.853 | 99.99th=[ 6980] 00:35:16.853 bw ( KiB/s): min=15616, max=16688, per=25.67%, avg=16059.20, stdev=355.86, samples=10 00:35:16.853 iops : min= 1952, max= 2086, avg=2007.40, stdev=44.48, samples=10 00:35:16.853 lat (msec) : 2=0.06%, 4=62.49%, 10=37.45% 00:35:16.853 cpu : usr=92.24%, sys=6.24%, ctx=137, majf=0, minf=0 00:35:16.853 IO depths : 1=0.1%, 2=2.6%, 4=67.5%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.853 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.853 issued rwts: total=10042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.853 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.853 filename1: (groupid=0, jobs=1): err= 0: pid=3746988: Fri Jul 12 13:43:13 2024 00:35:16.853 read: IOPS=1924, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5002msec) 00:35:16.853 slat (usec): min=3, max=230, avg=14.17, stdev= 6.19 00:35:16.853 clat (usec): min=1415, max=9286, avg=4113.25, stdev=671.75 00:35:16.853 lat (usec): min=1429, max=9298, avg=4127.42, stdev=670.96 00:35:16.853 clat percentiles (usec): 00:35:16.853 | 1.00th=[ 3163], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3687], 00:35:16.853 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4015], 00:35:16.853 | 70.00th=[ 4047], 80.00th=[ 4293], 90.00th=[ 5538], 95.00th=[ 5604], 00:35:16.853 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6783], 99.95th=[ 7046], 00:35:16.853 | 99.99th=[ 9241] 00:35:16.853 bw ( KiB/s): min=14944, max=15600, per=24.55%, avg=15360.00, stdev=200.96, samples=9 00:35:16.853 iops : min= 1868, max= 1950, avg=1920.00, stdev=25.12, samples=9 00:35:16.853 lat (msec) : 2=0.09%, 4=59.61%, 10=40.29% 00:35:16.853 cpu : usr=88.88%, sys=8.16%, ctx=256, majf=0, minf=9 00:35:16.853 IO depths : 1=0.1%, 2=1.8%, 4=70.2%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.853 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.853 issued rwts: total=9627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.853 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.853 00:35:16.853 Run status group 0 (all jobs): 00:35:16.853 READ: bw=61.1MiB/s (64.1MB/s), 15.0MiB/s-15.7MiB/s (15.8MB/s-16.4MB/s), io=306MiB (321MB), run=5002-5004msec 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.853 00:35:16.853 real 0m24.021s 00:35:16.853 user 4m29.905s 00:35:16.853 sys 0m8.512s 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:16.853 13:43:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.853 ************************************ 00:35:16.853 END TEST fio_dif_rand_params 00:35:16.853 ************************************ 00:35:16.853 13:43:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:16.853 13:43:13 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:16.853 13:43:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:16.853 13:43:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:16.853 13:43:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.853 ************************************ 00:35:16.853 START TEST fio_dif_digest 00:35:16.853 ************************************ 00:35:16.853 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:35:16.853 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:16.853 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:16.854 13:43:13 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.854 bdev_null0 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.854 [2024-07-12 13:43:13.815240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:16.854 { 00:35:16.854 "params": { 00:35:16.854 "name": "Nvme$subsystem", 00:35:16.854 "trtype": "$TEST_TRANSPORT", 00:35:16.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:16.854 "adrfam": "ipv4", 00:35:16.854 "trsvcid": "$NVMF_PORT", 00:35:16.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:16.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:16.854 "hdgst": ${hdgst:-false}, 00:35:16.854 "ddgst": ${ddgst:-false} 00:35:16.854 }, 00:35:16.854 "method": "bdev_nvme_attach_controller" 00:35:16.854 } 00:35:16.854 EOF 00:35:16.854 )") 00:35:16.854 
13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:16.854 "params": { 00:35:16.854 "name": "Nvme0", 00:35:16.854 "trtype": "tcp", 00:35:16.854 "traddr": "10.0.0.2", 00:35:16.854 "adrfam": "ipv4", 00:35:16.854 "trsvcid": "4420", 00:35:16.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:16.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:16.854 "hdgst": true, 00:35:16.854 "ddgst": true 00:35:16.854 }, 00:35:16.854 "method": "bdev_nvme_attach_controller" 00:35:16.854 }' 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:16.854 13:43:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.854 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:16.854 ... 
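This job targets a single subsystem backed by a DIF type 3 null bdev, runs 128k I/O at iodepth 3 across 3 jobs for 10 seconds, and sets hdgst/ddgst to true so the NVMe/TCP connection negotiates header and data digests. The target side it attaches to was created a few entries earlier through rpc_cmd, which in these tests wraps scripts/rpc.py; condensed, and assuming the default RPC socket, that setup amounts to roughly:

# Condensed from the rpc_cmd calls traced above: a DIF type 3 null bdev,
# one subsystem, and a TCP listener on the in-namespace target address.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420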
00:35:16.854 fio-3.35 00:35:16.854 Starting 3 threads 00:35:16.854 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.055 00:35:29.055 filename0: (groupid=0, jobs=1): err= 0: pid=3747859: Fri Jul 12 13:43:24 2024 00:35:29.055 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(260MiB/10044msec) 00:35:29.055 slat (nsec): min=4584, max=83602, avg=22499.66, stdev=5642.95 00:35:29.055 clat (usec): min=8481, max=56959, avg=14466.48, stdev=2373.71 00:35:29.055 lat (usec): min=8505, max=56975, avg=14488.98, stdev=2373.66 00:35:29.055 clat percentiles (usec): 00:35:29.055 | 1.00th=[ 9896], 5.00th=[12387], 10.00th=[12911], 20.00th=[13435], 00:35:29.055 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:35:29.055 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:35:29.055 | 99.00th=[17171], 99.50th=[17957], 99.90th=[55313], 99.95th=[55837], 00:35:29.055 | 99.99th=[56886] 00:35:29.055 bw ( KiB/s): min=24881, max=30464, per=33.66%, avg=26549.65, stdev=1126.14, samples=20 00:35:29.055 iops : min= 194, max= 238, avg=207.40, stdev= 8.83, samples=20 00:35:29.055 lat (msec) : 10=1.20%, 20=98.41%, 50=0.19%, 100=0.19% 00:35:29.055 cpu : usr=93.23%, sys=6.19%, ctx=51, majf=0, minf=187 00:35:29.055 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.055 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.055 filename0: (groupid=0, jobs=1): err= 0: pid=3747860: Fri Jul 12 13:43:24 2024 00:35:29.055 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(273MiB/10046msec) 00:35:29.055 slat (nsec): min=4532, max=91296, avg=15910.50, stdev=5064.46 00:35:29.055 clat (usec): min=8354, max=56480, avg=13772.15, stdev=2763.88 00:35:29.055 lat (usec): min=8363, max=56501, avg=13788.07, stdev=2763.82 00:35:29.055 clat percentiles (usec): 00:35:29.055 | 1.00th=[ 9765], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:35:29.055 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:35:29.055 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15008], 95.00th=[15533], 00:35:29.055 | 99.00th=[16319], 99.50th=[18744], 99.90th=[55837], 99.95th=[55837], 00:35:29.055 | 99.99th=[56361] 00:35:29.055 bw ( KiB/s): min=24064, max=30464, per=35.37%, avg=27904.00, stdev=1171.67, samples=20 00:35:29.055 iops : min= 188, max= 238, avg=218.00, stdev= 9.15, samples=20 00:35:29.055 lat (msec) : 10=1.15%, 20=98.49%, 100=0.37% 00:35:29.055 cpu : usr=92.71%, sys=6.81%, ctx=21, majf=0, minf=107 00:35:29.055 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.055 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.055 filename0: (groupid=0, jobs=1): err= 0: pid=3747861: Fri Jul 12 13:43:24 2024 00:35:29.055 read: IOPS=192, BW=24.1MiB/s (25.2MB/s)(242MiB/10045msec) 00:35:29.055 slat (nsec): min=4782, max=50030, avg=15809.07, stdev=4639.16 00:35:29.055 clat (usec): min=8798, max=59309, avg=15548.47, stdev=3376.67 00:35:29.055 lat (usec): min=8830, max=59328, avg=15564.28, stdev=3376.51 00:35:29.055 clat percentiles (usec): 00:35:29.055 | 
1.00th=[10945], 5.00th=[13304], 10.00th=[13960], 20.00th=[14353], 00:35:29.055 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15401], 60.00th=[15664], 00:35:29.055 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:35:29.055 | 99.00th=[18482], 99.50th=[52167], 99.90th=[58459], 99.95th=[59507], 00:35:29.055 | 99.99th=[59507] 00:35:29.055 bw ( KiB/s): min=22784, max=25600, per=31.32%, avg=24706.35, stdev=774.14, samples=20 00:35:29.055 iops : min= 178, max= 200, avg=193.00, stdev= 6.07, samples=20 00:35:29.055 lat (msec) : 10=0.26%, 20=99.02%, 50=0.16%, 100=0.57% 00:35:29.055 cpu : usr=93.54%, sys=5.99%, ctx=26, majf=0, minf=185 00:35:29.055 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.055 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.055 00:35:29.055 Run status group 0 (all jobs): 00:35:29.055 READ: bw=77.0MiB/s (80.8MB/s), 24.1MiB/s-27.1MiB/s (25.2MB/s-28.5MB/s), io=774MiB (811MB), run=10044-10046msec 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.055 00:35:29.055 real 0m11.052s 00:35:29.055 user 0m29.090s 00:35:29.055 sys 0m2.167s 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:29.055 13:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:29.055 ************************************ 00:35:29.055 END TEST fio_dif_digest 00:35:29.055 ************************************ 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:29.055 13:43:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:29.055 13:43:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:35:29.055 rmmod nvme_tcp 00:35:29.055 rmmod nvme_fabrics 00:35:29.055 rmmod nvme_keyring 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3741824 ']' 00:35:29.055 13:43:24 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3741824 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3741824 ']' 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3741824 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3741824 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3741824' 00:35:29.055 killing process with pid 3741824 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3741824 00:35:29.055 13:43:24 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3741824 00:35:29.055 13:43:25 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:29.055 13:43:25 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:29.055 Waiting for block devices as requested 00:35:29.055 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.055 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.055 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.314 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:29.314 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:29.314 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:29.572 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:29.572 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:29.572 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:29.831 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.831 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.831 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.831 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:30.090 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:30.090 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:30.090 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:30.348 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:30.348 13:43:27 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:30.348 13:43:27 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:30.348 13:43:27 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:30.348 13:43:27 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:30.348 13:43:27 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.348 13:43:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:30.348 13:43:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.881 13:43:29 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:32.881 00:35:32.881 real 1m6.471s 00:35:32.881 user 6m25.487s 00:35:32.881 sys 0m20.352s 00:35:32.881 13:43:29 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:35:32.881 13:43:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.881 ************************************ 00:35:32.881 END TEST nvmf_dif 00:35:32.881 ************************************ 00:35:32.881 13:43:29 -- common/autotest_common.sh@1142 -- # return 0 00:35:32.881 13:43:29 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:32.881 13:43:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:32.881 13:43:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:32.881 13:43:29 -- common/autotest_common.sh@10 -- # set +x 00:35:32.881 ************************************ 00:35:32.881 START TEST nvmf_abort_qd_sizes 00:35:32.881 ************************************ 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:32.881 * Looking for test storage... 00:35:32.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.881 13:43:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:32.881 13:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.784 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:34.784 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:34.784 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:34.784 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:34.784 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:34.784 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:34.785 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:34.785 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:34.785 Found net devices under 0000:09:00.0: cvl_0_0 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:34.785 Found net devices under 0000:09:00.1: cvl_0_1 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
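With the two E810 ports identified (cvl_0_0 and cvl_0_1), the nvmf_tcp_init steps traced next split them between the host and a private network namespace so that initiator and target traffic crosses real NICs on a single machine. Stripped of the trace prefixes, that wiring is roughly the following; interface names are the ones discovered above and the 10.0.0.x addresses are the test's fixed choices:

# Condensed from the nvmf_tcp_init trace below: cvl_0_0 becomes the target-side
# port inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the host namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks, as in the trace: ping each side from the other.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1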
00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:34.785 13:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.785 13:43:32 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.785 13:43:32 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.785 13:43:32 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:34.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:35:34.785 00:35:34.785 --- 10.0.0.2 ping statistics --- 00:35:34.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.785 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:35:34.785 13:43:32 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:35:34.785 00:35:34.785 --- 10.0.0.1 ping statistics --- 00:35:34.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.785 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:35:34.785 13:43:32 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.785 13:43:32 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:34.785 13:43:32 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:34.785 13:43:32 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:35.717 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:35.717 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:35.717 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:35.717 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:35.976 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:35.976 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:35.976 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:35.976 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:35.976 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:35.976 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:35.976 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:35.976 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:35.976 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:35.976 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:35.976 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:35.976 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:36.935 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:36.935 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.935 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:36.935 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:36.935 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.935 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:36.935 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3752647 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3752647 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3752647 ']' 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:37.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:37.193 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.193 [2024-07-12 13:43:34.473202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:35:37.193 [2024-07-12 13:43:34.473266] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.193 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.193 [2024-07-12 13:43:34.509193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:37.193 [2024-07-12 13:43:34.534432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:37.193 [2024-07-12 13:43:34.617800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.193 [2024-07-12 13:43:34.617870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.193 [2024-07-12 13:43:34.617882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.193 [2024-07-12 13:43:34.617893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.193 [2024-07-12 13:43:34.617916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:37.193 [2024-07-12 13:43:34.618008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.193 [2024-07-12 13:43:34.618113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.193 [2024-07-12 13:43:34.618204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.193 [2024-07-12 13:43:34.618210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:37.451 13:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.451 ************************************ 00:35:37.451 START TEST spdk_target_abort 00:35:37.451 ************************************ 00:35:37.451 13:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:35:37.451 13:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:37.451 13:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:35:37.452 13:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.452 13:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.730 spdk_targetn1 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.730 [2024-07-12 13:43:37.617965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.730 [2024-07-12 13:43:37.650226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:40.730 13:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:40.730 EAL: No free 2048 kB hugepages reported on node 1 00:35:44.004 Initializing NVMe Controllers 00:35:44.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:44.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:44.004 Initialization complete. Launching workers. 00:35:44.005 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11318, failed: 0 00:35:44.005 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1283, failed to submit 10035 00:35:44.005 success 765, unsuccess 518, failed 0 00:35:44.005 13:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:44.005 13:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:44.005 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.281 Initializing NVMe Controllers 00:35:47.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:47.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:47.281 Initialization complete. Launching workers. 00:35:47.281 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8528, failed: 0 00:35:47.281 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 7295 00:35:47.281 success 325, unsuccess 908, failed 0 00:35:47.281 13:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:47.281 13:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:47.281 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.560 Initializing NVMe Controllers 00:35:50.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:50.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:50.560 Initialization complete. Launching workers. 
00:35:50.560 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30135, failed: 0 00:35:50.560 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2723, failed to submit 27412 00:35:50.560 success 517, unsuccess 2206, failed 0 00:35:50.560 13:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:50.560 13:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.560 13:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:50.560 13:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.560 13:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:50.560 13:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.560 13:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3752647 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3752647 ']' 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3752647 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752647 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752647' 00:35:51.491 killing process with pid 3752647 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3752647 00:35:51.491 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3752647 00:35:51.749 00:35:51.749 real 0m14.208s 00:35:51.749 user 0m53.348s 00:35:51.749 sys 0m2.801s 00:35:51.749 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:51.749 13:43:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.749 ************************************ 00:35:51.749 END TEST spdk_target_abort 00:35:51.749 ************************************ 00:35:51.749 13:43:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:51.749 13:43:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:51.749 13:43:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:51.749 13:43:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:51.749 13:43:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:51.749 
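[editor's note] The spdk_target_abort flow traced above can be summarized as the following minimal shell sketch. It is reconstructed from this trace, not taken from the test script verbatim: it assumes a running spdk_tgt and that the rpc_cmd helper seen in the trace forwards to scripts/rpc.py; the PCI address 0000:0b:00.0, subsystem NQN nqn.2016-06.io.spdk:testnqn, and 10.0.0.2:4420 are simply the values this particular run used.
  # export the local NVMe device over NVMe/TCP via the SPDK target
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target   # creates bdev spdk_targetn1
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # rabort then runs the abort example once per queue depth (4, 24, 64)
  for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done
The per-depth result lines above ("abort submitted N, failed to submit M / success X, unsuccess Y") are the example application's own summary output; the kernel_target_abort test that follows repeats the same abort runs against a kernel nvmet target on 10.0.0.1 instead of the SPDK target.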
************************************ 00:35:51.749 START TEST kernel_target_abort 00:35:51.749 ************************************ 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:51.749 13:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:52.679 Waiting for block devices as requested 00:35:52.679 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:52.938 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:52.938 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:52.938 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:52.938 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:53.196 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:53.196 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:53.196 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:53.196 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:53.454 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:53.454 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:53.454 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:53.713 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:53.713 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:53.713 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:53.971 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:53.971 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:53.971 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:54.229 No valid GPT data, bailing 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:54.229 13:43:51 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:35:54.229 00:35:54.229 Discovery Log Number of Records 2, Generation counter 2 00:35:54.229 =====Discovery Log Entry 0====== 00:35:54.229 trtype: tcp 00:35:54.229 adrfam: ipv4 00:35:54.229 subtype: current discovery subsystem 00:35:54.229 treq: not specified, sq flow control disable supported 00:35:54.229 portid: 1 00:35:54.229 trsvcid: 4420 00:35:54.229 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:54.229 traddr: 10.0.0.1 00:35:54.229 eflags: none 00:35:54.229 sectype: none 00:35:54.229 =====Discovery Log Entry 1====== 00:35:54.229 trtype: tcp 00:35:54.229 adrfam: ipv4 00:35:54.229 subtype: nvme subsystem 00:35:54.229 treq: not specified, sq flow control disable supported 00:35:54.229 portid: 1 00:35:54.229 trsvcid: 4420 00:35:54.229 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:54.229 traddr: 10.0.0.1 00:35:54.229 eflags: none 00:35:54.229 sectype: none 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:54.229 13:43:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:54.229 13:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:54.229 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.550 Initializing NVMe Controllers 00:35:57.550 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:57.550 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:57.550 Initialization complete. Launching workers. 00:35:57.550 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39556, failed: 0 00:35:57.550 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39556, failed to submit 0 00:35:57.550 success 0, unsuccess 39556, failed 0 00:35:57.550 13:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:57.550 13:43:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.550 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.832 Initializing NVMe Controllers 00:36:00.832 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:00.832 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:00.832 Initialization complete. Launching workers. 
00:36:00.832 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78685, failed: 0 00:36:00.832 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19826, failed to submit 58859 00:36:00.832 success 0, unsuccess 19826, failed 0 00:36:00.832 13:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:00.832 13:43:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.832 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.113 Initializing NVMe Controllers 00:36:04.113 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:04.113 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:04.113 Initialization complete. Launching workers. 00:36:04.113 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75649, failed: 0 00:36:04.113 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18890, failed to submit 56759 00:36:04.113 success 0, unsuccess 18890, failed 0 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:04.113 13:44:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:04.680 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:04.680 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:04.680 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:04.680 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:04.680 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:04.940 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:04.940 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:04.940 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:04.940 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:04.940 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:04.940 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:04.940 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:04.940 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:04.940 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:04.940 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:04.940 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:05.871 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:05.871 00:36:05.871 real 0m14.276s 00:36:05.871 user 0m5.237s 00:36:05.871 sys 0m3.332s 00:36:05.871 13:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:05.871 13:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.871 ************************************ 00:36:05.871 END TEST kernel_target_abort 00:36:05.871 ************************************ 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:05.871 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:05.871 rmmod nvme_tcp 00:36:06.129 rmmod nvme_fabrics 00:36:06.129 rmmod nvme_keyring 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3752647 ']' 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3752647 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3752647 ']' 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3752647 00:36:06.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3752647) - No such process 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3752647 is not found' 00:36:06.129 Process with pid 3752647 is not found 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:06.129 13:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:07.067 Waiting for block devices as requested 00:36:07.067 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.327 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:07.327 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.327 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.586 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.586 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.586 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:07.586 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:07.845 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:07.845 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.845 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:08.104 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:08.104 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:08.104 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:36:08.104 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:08.364 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:08.364 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:08.364 13:44:05 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:08.364 13:44:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:08.364 13:44:05 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:08.364 13:44:05 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:08.364 13:44:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.364 13:44:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:08.364 13:44:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.901 13:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:10.901 00:36:10.901 real 0m37.989s 00:36:10.901 user 1m0.695s 00:36:10.901 sys 0m9.583s 00:36:10.901 13:44:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:10.901 13:44:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:10.901 ************************************ 00:36:10.901 END TEST nvmf_abort_qd_sizes 00:36:10.901 ************************************ 00:36:10.901 13:44:07 -- common/autotest_common.sh@1142 -- # return 0 00:36:10.901 13:44:07 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:10.901 13:44:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:10.901 13:44:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:10.901 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:36:10.901 ************************************ 00:36:10.901 START TEST keyring_file 00:36:10.901 ************************************ 00:36:10.901 13:44:07 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:10.901 * Looking for test storage... 
00:36:10.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.901 13:44:07 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.901 13:44:07 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.901 13:44:07 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.901 13:44:07 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.901 13:44:07 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.901 13:44:07 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.901 13:44:07 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:10.901 13:44:07 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nqrDEP95vz 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:10.901 13:44:07 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nqrDEP95vz 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nqrDEP95vz 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nqrDEP95vz 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mqIy3wJfnc 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:10.901 13:44:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mqIy3wJfnc 00:36:10.901 13:44:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mqIy3wJfnc 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mqIy3wJfnc 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@30 -- # tgtpid=3758389 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:10.901 13:44:07 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3758389 00:36:10.901 13:44:07 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3758389 ']' 00:36:10.901 13:44:07 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:10.901 13:44:07 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:10.901 13:44:07 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:10.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:10.901 13:44:07 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:10.901 13:44:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:10.901 [2024-07-12 13:44:08.047214] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:36:10.902 [2024-07-12 13:44:08.047311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758389 ] 00:36:10.902 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.902 [2024-07-12 13:44:08.079562] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:10.902 [2024-07-12 13:44:08.106119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.902 [2024-07-12 13:44:08.189914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.159 13:44:08 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:11.160 13:44:08 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:11.160 [2024-07-12 13:44:08.429206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:11.160 null0 00:36:11.160 [2024-07-12 13:44:08.461265] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:11.160 [2024-07-12 13:44:08.461734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:11.160 [2024-07-12 13:44:08.469281] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.160 13:44:08 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:11.160 [2024-07-12 13:44:08.477293] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:11.160 request: 00:36:11.160 { 00:36:11.160 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.160 "secure_channel": false, 00:36:11.160 "listen_address": { 00:36:11.160 "trtype": "tcp", 00:36:11.160 "traddr": "127.0.0.1", 00:36:11.160 "trsvcid": "4420" 00:36:11.160 }, 00:36:11.160 "method": "nvmf_subsystem_add_listener", 00:36:11.160 "req_id": 1 00:36:11.160 } 00:36:11.160 Got JSON-RPC error response 00:36:11.160 response: 00:36:11.160 { 00:36:11.160 "code": -32602, 00:36:11.160 "message": "Invalid parameters" 00:36:11.160 } 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:11.160 13:44:08 keyring_file -- keyring/file.sh@46 -- # bperfpid=3758399 00:36:11.160 13:44:08 
keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:11.160 13:44:08 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3758399 /var/tmp/bperf.sock 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3758399 ']' 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:11.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:11.160 13:44:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:11.160 [2024-07-12 13:44:08.521761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:36:11.160 [2024-07-12 13:44:08.521836] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758399 ] 00:36:11.160 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.160 [2024-07-12 13:44:08.552470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:11.160 [2024-07-12 13:44:08.577986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.418 [2024-07-12 13:44:08.662173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.418 13:44:08 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:11.418 13:44:08 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:11.418 13:44:08 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nqrDEP95vz 00:36:11.418 13:44:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nqrDEP95vz 00:36:11.676 13:44:09 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mqIy3wJfnc 00:36:11.676 13:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mqIy3wJfnc 00:36:11.934 13:44:09 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:11.934 13:44:09 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:11.934 13:44:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:11.934 13:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.934 13:44:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.192 13:44:09 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.nqrDEP95vz == \/\t\m\p\/\t\m\p\.\n\q\r\D\E\P\9\5\v\z ]] 00:36:12.192 13:44:09 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:12.192 13:44:09 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:12.192 13:44:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:12.192 13:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.192 13:44:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:12.449 13:44:09 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mqIy3wJfnc == \/\t\m\p\/\t\m\p\.\m\q\I\y\3\w\J\f\n\c ]] 00:36:12.449 13:44:09 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:12.449 13:44:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:12.449 13:44:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.449 13:44:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.449 13:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.449 13:44:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.707 13:44:09 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:12.707 13:44:09 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:12.707 13:44:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:12.707 13:44:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.707 13:44:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.707 13:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.707 13:44:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:12.965 13:44:10 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:12.965 13:44:10 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.965 13:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:13.224 [2024-07-12 13:44:10.489240] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:13.224 nvme0n1 00:36:13.224 13:44:10 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:13.224 13:44:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:13.224 13:44:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:13.224 13:44:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.224 13:44:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:13.224 13:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.482 13:44:10 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:13.482 13:44:10 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:13.482 13:44:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:13.482 13:44:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:13.482 13:44:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.482 13:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:13.482 13:44:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:13.739 13:44:11 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:13.739 13:44:11 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:13.739 Running I/O for 1 seconds... 00:36:15.112 00:36:15.112 Latency(us) 00:36:15.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:15.112 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:15.112 nvme0n1 : 1.02 5215.21 20.37 0.00 0.00 24285.45 9272.13 30680.56 00:36:15.112 =================================================================================================================== 00:36:15.112 Total : 5215.21 20.37 0.00 0.00 24285.45 9272.13 30680.56 00:36:15.112 0 00:36:15.112 13:44:12 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:15.112 13:44:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:15.112 13:44:12 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:15.112 13:44:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:15.112 13:44:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.112 13:44:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.112 13:44:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.112 13:44:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:15.369 13:44:12 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:15.369 13:44:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:15.369 13:44:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:15.369 13:44:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.369 13:44:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.369 13:44:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:15.369 13:44:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.629 13:44:12 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:15.629 13:44:12 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:15.629 13:44:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:15.629 13:44:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:15.629 13:44:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:15.629 13:44:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.629 13:44:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:15.629 13:44:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:15.629 13:44:12 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:15.629 13:44:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:15.887 [2024-07-12 13:44:13.187406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:15.887 [2024-07-12 13:44:13.187503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d97b0 (107): Transport endpoint is not connected 00:36:15.887 [2024-07-12 13:44:13.188495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d97b0 (9): Bad file descriptor 00:36:15.887 [2024-07-12 13:44:13.189494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:15.887 [2024-07-12 13:44:13.189512] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:15.887 [2024-07-12 13:44:13.189524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:15.887 request: 00:36:15.887 { 00:36:15.887 "name": "nvme0", 00:36:15.887 "trtype": "tcp", 00:36:15.887 "traddr": "127.0.0.1", 00:36:15.887 "adrfam": "ipv4", 00:36:15.887 "trsvcid": "4420", 00:36:15.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.887 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.887 "prchk_reftag": false, 00:36:15.887 "prchk_guard": false, 00:36:15.887 "hdgst": false, 00:36:15.887 "ddgst": false, 00:36:15.887 "psk": "key1", 00:36:15.887 "method": "bdev_nvme_attach_controller", 00:36:15.887 "req_id": 1 00:36:15.887 } 00:36:15.887 Got JSON-RPC error response 00:36:15.887 response: 00:36:15.887 { 00:36:15.887 "code": -5, 00:36:15.887 "message": "Input/output error" 00:36:15.887 } 00:36:15.887 13:44:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:15.887 13:44:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:15.887 13:44:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:15.887 13:44:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:15.887 13:44:13 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:15.887 13:44:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:15.887 13:44:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.887 13:44:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.887 13:44:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:15.887 13:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.143 13:44:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:16.143 13:44:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:16.143 13:44:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:16.143 13:44:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.143 13:44:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.143 13:44:13 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.143 13:44:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:16.401 13:44:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:16.401 13:44:13 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:16.401 13:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:16.658 13:44:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:16.658 13:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:16.916 13:44:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:16.916 13:44:14 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:16.916 13:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.174 13:44:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:17.174 13:44:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nqrDEP95vz 00:36:17.174 13:44:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nqrDEP95vz 00:36:17.174 13:44:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:17.174 13:44:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nqrDEP95vz 00:36:17.174 13:44:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:17.174 13:44:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:17.174 13:44:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:17.174 13:44:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:17.174 13:44:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nqrDEP95vz 00:36:17.174 13:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nqrDEP95vz 00:36:17.431 [2024-07-12 13:44:14.679540] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nqrDEP95vz': 0100660 00:36:17.431 [2024-07-12 13:44:14.679581] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:17.431 request: 00:36:17.431 { 00:36:17.431 "name": "key0", 00:36:17.431 "path": "/tmp/tmp.nqrDEP95vz", 00:36:17.431 "method": "keyring_file_add_key", 00:36:17.431 "req_id": 1 00:36:17.431 } 00:36:17.431 Got JSON-RPC error response 00:36:17.431 response: 00:36:17.431 { 00:36:17.431 "code": -1, 00:36:17.431 "message": "Operation not permitted" 00:36:17.431 } 00:36:17.431 13:44:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:17.431 13:44:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:17.431 13:44:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:17.431 13:44:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:17.431 13:44:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nqrDEP95vz 00:36:17.431 13:44:14 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nqrDEP95vz 00:36:17.431 13:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nqrDEP95vz 00:36:17.688 13:44:14 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nqrDEP95vz 00:36:17.688 13:44:14 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:17.688 13:44:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:17.688 13:44:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.688 13:44:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.688 13:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.688 13:44:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:17.945 13:44:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:17.945 13:44:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.945 13:44:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:17.945 13:44:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.945 13:44:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:17.945 13:44:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:17.945 13:44:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:17.945 13:44:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:17.945 13:44:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.945 13:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.202 [2024-07-12 13:44:15.421549] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nqrDEP95vz': No such file or directory 00:36:18.202 [2024-07-12 13:44:15.421582] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:18.202 [2024-07-12 13:44:15.421607] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:18.202 [2024-07-12 13:44:15.421618] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:18.203 [2024-07-12 13:44:15.421629] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:18.203 request: 00:36:18.203 { 00:36:18.203 "name": "nvme0", 00:36:18.203 "trtype": "tcp", 00:36:18.203 "traddr": "127.0.0.1", 00:36:18.203 "adrfam": "ipv4", 00:36:18.203 "trsvcid": "4420", 00:36:18.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:18.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:18.203 "prchk_reftag": false, 00:36:18.203 
"prchk_guard": false, 00:36:18.203 "hdgst": false, 00:36:18.203 "ddgst": false, 00:36:18.203 "psk": "key0", 00:36:18.203 "method": "bdev_nvme_attach_controller", 00:36:18.203 "req_id": 1 00:36:18.203 } 00:36:18.203 Got JSON-RPC error response 00:36:18.203 response: 00:36:18.203 { 00:36:18.203 "code": -19, 00:36:18.203 "message": "No such device" 00:36:18.203 } 00:36:18.203 13:44:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:18.203 13:44:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:18.203 13:44:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:18.203 13:44:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:18.203 13:44:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:18.203 13:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:18.460 13:44:15 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nJ9OT6TqM8 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:18.460 13:44:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:18.460 13:44:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:18.460 13:44:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:18.460 13:44:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:18.460 13:44:15 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:18.460 13:44:15 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nJ9OT6TqM8 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nJ9OT6TqM8 00:36:18.460 13:44:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.nJ9OT6TqM8 00:36:18.460 13:44:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nJ9OT6TqM8 00:36:18.460 13:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nJ9OT6TqM8 00:36:18.717 13:44:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.717 13:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.974 nvme0n1 00:36:18.974 13:44:16 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:18.974 13:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.974 13:44:16 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.974 13:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.974 13:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.974 13:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.231 13:44:16 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:19.231 13:44:16 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:19.232 13:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:19.489 13:44:16 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:19.489 13:44:16 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:19.489 13:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.489 13:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.489 13:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.746 13:44:17 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:19.746 13:44:17 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:19.746 13:44:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:19.746 13:44:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.746 13:44:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.746 13:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.746 13:44:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:20.004 13:44:17 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:20.004 13:44:17 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:20.004 13:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:20.261 13:44:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:20.261 13:44:17 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:20.261 13:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.518 13:44:17 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:20.518 13:44:17 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nJ9OT6TqM8 00:36:20.518 13:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nJ9OT6TqM8 00:36:20.775 13:44:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mqIy3wJfnc 00:36:20.775 13:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mqIy3wJfnc 00:36:21.033 13:44:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:36:21.033 13:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:21.290 nvme0n1 00:36:21.290 13:44:18 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:21.290 13:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:21.548 13:44:18 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:21.548 "subsystems": [ 00:36:21.548 { 00:36:21.548 "subsystem": "keyring", 00:36:21.548 "config": [ 00:36:21.548 { 00:36:21.548 "method": "keyring_file_add_key", 00:36:21.548 "params": { 00:36:21.548 "name": "key0", 00:36:21.548 "path": "/tmp/tmp.nJ9OT6TqM8" 00:36:21.548 } 00:36:21.548 }, 00:36:21.548 { 00:36:21.548 "method": "keyring_file_add_key", 00:36:21.548 "params": { 00:36:21.548 "name": "key1", 00:36:21.548 "path": "/tmp/tmp.mqIy3wJfnc" 00:36:21.548 } 00:36:21.548 } 00:36:21.548 ] 00:36:21.548 }, 00:36:21.548 { 00:36:21.548 "subsystem": "iobuf", 00:36:21.548 "config": [ 00:36:21.548 { 00:36:21.548 "method": "iobuf_set_options", 00:36:21.548 "params": { 00:36:21.548 "small_pool_count": 8192, 00:36:21.548 "large_pool_count": 1024, 00:36:21.548 "small_bufsize": 8192, 00:36:21.548 "large_bufsize": 135168 00:36:21.548 } 00:36:21.548 } 00:36:21.548 ] 00:36:21.548 }, 00:36:21.548 { 00:36:21.548 "subsystem": "sock", 00:36:21.548 "config": [ 00:36:21.548 { 00:36:21.548 "method": "sock_set_default_impl", 00:36:21.548 "params": { 00:36:21.548 "impl_name": "posix" 00:36:21.548 } 00:36:21.548 }, 00:36:21.548 { 00:36:21.548 "method": "sock_impl_set_options", 00:36:21.548 "params": { 00:36:21.548 "impl_name": "ssl", 00:36:21.548 "recv_buf_size": 4096, 00:36:21.548 "send_buf_size": 4096, 00:36:21.548 "enable_recv_pipe": true, 00:36:21.548 "enable_quickack": false, 00:36:21.548 "enable_placement_id": 0, 00:36:21.548 "enable_zerocopy_send_server": true, 00:36:21.548 "enable_zerocopy_send_client": false, 00:36:21.548 "zerocopy_threshold": 0, 00:36:21.548 "tls_version": 0, 00:36:21.548 "enable_ktls": false 00:36:21.548 } 00:36:21.548 }, 00:36:21.548 { 00:36:21.548 "method": "sock_impl_set_options", 00:36:21.548 "params": { 00:36:21.548 "impl_name": "posix", 00:36:21.548 "recv_buf_size": 2097152, 00:36:21.548 "send_buf_size": 2097152, 00:36:21.548 "enable_recv_pipe": true, 00:36:21.548 "enable_quickack": false, 00:36:21.548 "enable_placement_id": 0, 00:36:21.548 "enable_zerocopy_send_server": true, 00:36:21.549 "enable_zerocopy_send_client": false, 00:36:21.549 "zerocopy_threshold": 0, 00:36:21.549 "tls_version": 0, 00:36:21.549 "enable_ktls": false 00:36:21.549 } 00:36:21.549 } 00:36:21.549 ] 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "subsystem": "vmd", 00:36:21.549 "config": [] 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "subsystem": "accel", 00:36:21.549 "config": [ 00:36:21.549 { 00:36:21.549 "method": "accel_set_options", 00:36:21.549 "params": { 00:36:21.549 "small_cache_size": 128, 00:36:21.549 "large_cache_size": 16, 00:36:21.549 "task_count": 2048, 00:36:21.549 "sequence_count": 2048, 00:36:21.549 "buf_count": 2048 00:36:21.549 } 00:36:21.549 } 00:36:21.549 ] 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "subsystem": "bdev", 00:36:21.549 "config": [ 00:36:21.549 { 00:36:21.549 "method": "bdev_set_options", 00:36:21.549 
"params": { 00:36:21.549 "bdev_io_pool_size": 65535, 00:36:21.549 "bdev_io_cache_size": 256, 00:36:21.549 "bdev_auto_examine": true, 00:36:21.549 "iobuf_small_cache_size": 128, 00:36:21.549 "iobuf_large_cache_size": 16 00:36:21.549 } 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "method": "bdev_raid_set_options", 00:36:21.549 "params": { 00:36:21.549 "process_window_size_kb": 1024 00:36:21.549 } 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "method": "bdev_iscsi_set_options", 00:36:21.549 "params": { 00:36:21.549 "timeout_sec": 30 00:36:21.549 } 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "method": "bdev_nvme_set_options", 00:36:21.549 "params": { 00:36:21.549 "action_on_timeout": "none", 00:36:21.549 "timeout_us": 0, 00:36:21.549 "timeout_admin_us": 0, 00:36:21.549 "keep_alive_timeout_ms": 10000, 00:36:21.549 "arbitration_burst": 0, 00:36:21.549 "low_priority_weight": 0, 00:36:21.549 "medium_priority_weight": 0, 00:36:21.549 "high_priority_weight": 0, 00:36:21.549 "nvme_adminq_poll_period_us": 10000, 00:36:21.549 "nvme_ioq_poll_period_us": 0, 00:36:21.549 "io_queue_requests": 512, 00:36:21.549 "delay_cmd_submit": true, 00:36:21.549 "transport_retry_count": 4, 00:36:21.549 "bdev_retry_count": 3, 00:36:21.549 "transport_ack_timeout": 0, 00:36:21.549 "ctrlr_loss_timeout_sec": 0, 00:36:21.549 "reconnect_delay_sec": 0, 00:36:21.549 "fast_io_fail_timeout_sec": 0, 00:36:21.549 "disable_auto_failback": false, 00:36:21.549 "generate_uuids": false, 00:36:21.549 "transport_tos": 0, 00:36:21.549 "nvme_error_stat": false, 00:36:21.549 "rdma_srq_size": 0, 00:36:21.549 "io_path_stat": false, 00:36:21.549 "allow_accel_sequence": false, 00:36:21.549 "rdma_max_cq_size": 0, 00:36:21.549 "rdma_cm_event_timeout_ms": 0, 00:36:21.549 "dhchap_digests": [ 00:36:21.549 "sha256", 00:36:21.549 "sha384", 00:36:21.549 "sha512" 00:36:21.549 ], 00:36:21.549 "dhchap_dhgroups": [ 00:36:21.549 "null", 00:36:21.549 "ffdhe2048", 00:36:21.549 "ffdhe3072", 00:36:21.549 "ffdhe4096", 00:36:21.549 "ffdhe6144", 00:36:21.549 "ffdhe8192" 00:36:21.549 ] 00:36:21.549 } 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "method": "bdev_nvme_attach_controller", 00:36:21.549 "params": { 00:36:21.549 "name": "nvme0", 00:36:21.549 "trtype": "TCP", 00:36:21.549 "adrfam": "IPv4", 00:36:21.549 "traddr": "127.0.0.1", 00:36:21.549 "trsvcid": "4420", 00:36:21.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.549 "prchk_reftag": false, 00:36:21.549 "prchk_guard": false, 00:36:21.549 "ctrlr_loss_timeout_sec": 0, 00:36:21.549 "reconnect_delay_sec": 0, 00:36:21.549 "fast_io_fail_timeout_sec": 0, 00:36:21.549 "psk": "key0", 00:36:21.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.549 "hdgst": false, 00:36:21.549 "ddgst": false 00:36:21.549 } 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "method": "bdev_nvme_set_hotplug", 00:36:21.549 "params": { 00:36:21.549 "period_us": 100000, 00:36:21.549 "enable": false 00:36:21.549 } 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "method": "bdev_wait_for_examine" 00:36:21.549 } 00:36:21.549 ] 00:36:21.549 }, 00:36:21.549 { 00:36:21.549 "subsystem": "nbd", 00:36:21.549 "config": [] 00:36:21.549 } 00:36:21.549 ] 00:36:21.549 }' 00:36:21.549 13:44:18 keyring_file -- keyring/file.sh@114 -- # killprocess 3758399 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3758399 ']' 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3758399 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:21.549 13:44:18 keyring_file -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3758399 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3758399' 00:36:21.549 killing process with pid 3758399 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@967 -- # kill 3758399 00:36:21.549 Received shutdown signal, test time was about 1.000000 seconds 00:36:21.549 00:36:21.549 Latency(us) 00:36:21.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.549 =================================================================================================================== 00:36:21.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:21.549 13:44:18 keyring_file -- common/autotest_common.sh@972 -- # wait 3758399 00:36:21.807 13:44:19 keyring_file -- keyring/file.sh@117 -- # bperfpid=3759748 00:36:21.807 13:44:19 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3759748 /var/tmp/bperf.sock 00:36:21.807 13:44:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3759748 ']' 00:36:21.807 13:44:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:21.807 13:44:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:21.807 13:44:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:21.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
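The records above close out the first bdevperf instance: its live configuration is dumped with save_config (keyring keys, sock options, and the TLS-attached nvme0 controller included) and the process is killed, and the records that follow relaunch bdevperf and hand that same JSON back on a file descriptor (-c /dev/fd/63), so the keys and the controller are recreated purely from saved configuration. Below is a condensed sketch of that save-and-replay pattern, with the jenkins paths shortened and the JSON written to a temporary file instead of a process-substitution fd for readability; "$bperf_pid" stands in for the PID the harness tracks (3758399 here).

# Dump the running app's full subsystem configuration over its RPC socket.
scripts/rpc.py -s /var/tmp/bperf.sock save_config > /tmp/bperf_config.json

# Stop the first bdevperf instance.
kill "$bperf_pid"

# Relaunch bdevperf; it rebuilds the keyring entries and the PSK-protected
# NVMe/TCP controller from the saved JSON before running I/O.
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c /tmp/bperf_config.json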
00:36:21.807 13:44:19 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:21.807 13:44:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:21.807 13:44:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:21.807 13:44:19 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:21.807 "subsystems": [ 00:36:21.807 { 00:36:21.807 "subsystem": "keyring", 00:36:21.807 "config": [ 00:36:21.807 { 00:36:21.807 "method": "keyring_file_add_key", 00:36:21.807 "params": { 00:36:21.807 "name": "key0", 00:36:21.807 "path": "/tmp/tmp.nJ9OT6TqM8" 00:36:21.807 } 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "method": "keyring_file_add_key", 00:36:21.807 "params": { 00:36:21.807 "name": "key1", 00:36:21.807 "path": "/tmp/tmp.mqIy3wJfnc" 00:36:21.807 } 00:36:21.807 } 00:36:21.807 ] 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "subsystem": "iobuf", 00:36:21.807 "config": [ 00:36:21.807 { 00:36:21.807 "method": "iobuf_set_options", 00:36:21.807 "params": { 00:36:21.807 "small_pool_count": 8192, 00:36:21.807 "large_pool_count": 1024, 00:36:21.807 "small_bufsize": 8192, 00:36:21.807 "large_bufsize": 135168 00:36:21.807 } 00:36:21.807 } 00:36:21.807 ] 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "subsystem": "sock", 00:36:21.807 "config": [ 00:36:21.807 { 00:36:21.807 "method": "sock_set_default_impl", 00:36:21.807 "params": { 00:36:21.807 "impl_name": "posix" 00:36:21.807 } 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "method": "sock_impl_set_options", 00:36:21.807 "params": { 00:36:21.807 "impl_name": "ssl", 00:36:21.807 "recv_buf_size": 4096, 00:36:21.807 "send_buf_size": 4096, 00:36:21.807 "enable_recv_pipe": true, 00:36:21.807 "enable_quickack": false, 00:36:21.807 "enable_placement_id": 0, 00:36:21.807 "enable_zerocopy_send_server": true, 00:36:21.807 "enable_zerocopy_send_client": false, 00:36:21.807 "zerocopy_threshold": 0, 00:36:21.807 "tls_version": 0, 00:36:21.807 "enable_ktls": false 00:36:21.807 } 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "method": "sock_impl_set_options", 00:36:21.807 "params": { 00:36:21.807 "impl_name": "posix", 00:36:21.807 "recv_buf_size": 2097152, 00:36:21.807 "send_buf_size": 2097152, 00:36:21.807 "enable_recv_pipe": true, 00:36:21.807 "enable_quickack": false, 00:36:21.807 "enable_placement_id": 0, 00:36:21.807 "enable_zerocopy_send_server": true, 00:36:21.807 "enable_zerocopy_send_client": false, 00:36:21.807 "zerocopy_threshold": 0, 00:36:21.807 "tls_version": 0, 00:36:21.807 "enable_ktls": false 00:36:21.807 } 00:36:21.807 } 00:36:21.807 ] 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "subsystem": "vmd", 00:36:21.807 "config": [] 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "subsystem": "accel", 00:36:21.807 "config": [ 00:36:21.807 { 00:36:21.807 "method": "accel_set_options", 00:36:21.807 "params": { 00:36:21.807 "small_cache_size": 128, 00:36:21.807 "large_cache_size": 16, 00:36:21.807 "task_count": 2048, 00:36:21.807 "sequence_count": 2048, 00:36:21.807 "buf_count": 2048 00:36:21.807 } 00:36:21.807 } 00:36:21.807 ] 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "subsystem": "bdev", 00:36:21.807 "config": [ 00:36:21.807 { 00:36:21.807 "method": "bdev_set_options", 00:36:21.807 "params": { 00:36:21.807 "bdev_io_pool_size": 65535, 00:36:21.807 "bdev_io_cache_size": 256, 00:36:21.807 "bdev_auto_examine": true, 00:36:21.807 "iobuf_small_cache_size": 128, 00:36:21.807 "iobuf_large_cache_size": 16 
00:36:21.807 } 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "method": "bdev_raid_set_options", 00:36:21.807 "params": { 00:36:21.807 "process_window_size_kb": 1024 00:36:21.807 } 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "method": "bdev_iscsi_set_options", 00:36:21.807 "params": { 00:36:21.807 "timeout_sec": 30 00:36:21.807 } 00:36:21.807 }, 00:36:21.807 { 00:36:21.807 "method": "bdev_nvme_set_options", 00:36:21.807 "params": { 00:36:21.807 "action_on_timeout": "none", 00:36:21.807 "timeout_us": 0, 00:36:21.807 "timeout_admin_us": 0, 00:36:21.807 "keep_alive_timeout_ms": 10000, 00:36:21.807 "arbitration_burst": 0, 00:36:21.807 "low_priority_weight": 0, 00:36:21.807 "medium_priority_weight": 0, 00:36:21.807 "high_priority_weight": 0, 00:36:21.807 "nvme_adminq_poll_period_us": 10000, 00:36:21.807 "nvme_ioq_poll_period_us": 0, 00:36:21.807 "io_queue_requests": 512, 00:36:21.807 "delay_cmd_submit": true, 00:36:21.808 "transport_retry_count": 4, 00:36:21.808 "bdev_retry_count": 3, 00:36:21.808 "transport_ack_timeout": 0, 00:36:21.808 "ctrlr_loss_timeout_sec": 0, 00:36:21.808 "reconnect_delay_sec": 0, 00:36:21.808 "fast_io_fail_timeout_sec": 0, 00:36:21.808 "disable_auto_failback": false, 00:36:21.808 "generate_uuids": false, 00:36:21.808 "transport_tos": 0, 00:36:21.808 "nvme_error_stat": false, 00:36:21.808 "rdma_srq_size": 0, 00:36:21.808 "io_path_stat": false, 00:36:21.808 "allow_accel_sequence": false, 00:36:21.808 "rdma_max_cq_size": 0, 00:36:21.808 "rdma_cm_event_timeout_ms": 0, 00:36:21.808 "dhchap_digests": [ 00:36:21.808 "sha256", 00:36:21.808 "sha384", 00:36:21.808 "sha512" 00:36:21.808 ], 00:36:21.808 "dhchap_dhgroups": [ 00:36:21.808 "null", 00:36:21.808 "ffdhe2048", 00:36:21.808 "ffdhe3072", 00:36:21.808 "ffdhe4096", 00:36:21.808 "ffdhe6144", 00:36:21.808 "ffdhe8192" 00:36:21.808 ] 00:36:21.808 } 00:36:21.808 }, 00:36:21.808 { 00:36:21.808 "method": "bdev_nvme_attach_controller", 00:36:21.808 "params": { 00:36:21.808 "name": "nvme0", 00:36:21.808 "trtype": "TCP", 00:36:21.808 "adrfam": "IPv4", 00:36:21.808 "traddr": "127.0.0.1", 00:36:21.808 "trsvcid": "4420", 00:36:21.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.808 "prchk_reftag": false, 00:36:21.808 "prchk_guard": false, 00:36:21.808 "ctrlr_loss_timeout_sec": 0, 00:36:21.808 "reconnect_delay_sec": 0, 00:36:21.808 "fast_io_fail_timeout_sec": 0, 00:36:21.808 "psk": "key0", 00:36:21.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.808 "hdgst": false, 00:36:21.808 "ddgst": false 00:36:21.808 } 00:36:21.808 }, 00:36:21.808 { 00:36:21.808 "method": "bdev_nvme_set_hotplug", 00:36:21.808 "params": { 00:36:21.808 "period_us": 100000, 00:36:21.808 "enable": false 00:36:21.808 } 00:36:21.808 }, 00:36:21.808 { 00:36:21.808 "method": "bdev_wait_for_examine" 00:36:21.808 } 00:36:21.808 ] 00:36:21.808 }, 00:36:21.808 { 00:36:21.808 "subsystem": "nbd", 00:36:21.808 "config": [] 00:36:21.808 } 00:36:21.808 ] 00:36:21.808 }' 00:36:21.808 [2024-07-12 13:44:19.161922] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:36:21.808 [2024-07-12 13:44:19.162016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3759748 ] 00:36:21.808 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.808 [2024-07-12 13:44:19.195971] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:36:21.808 [2024-07-12 13:44:19.224432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.065 [2024-07-12 13:44:19.310302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.065 [2024-07-12 13:44:19.493075] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:22.997 13:44:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:22.997 13:44:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:22.997 13:44:20 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:22.997 13:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.997 13:44:20 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:22.997 13:44:20 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:22.997 13:44:20 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:22.997 13:44:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.997 13:44:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.997 13:44:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.997 13:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.997 13:44:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:23.254 13:44:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:23.254 13:44:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:23.254 13:44:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:23.254 13:44:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.254 13:44:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.254 13:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.254 13:44:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:23.511 13:44:20 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:23.511 13:44:20 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:23.511 13:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:23.511 13:44:20 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:23.768 13:44:21 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:23.768 13:44:21 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:23.768 13:44:21 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nJ9OT6TqM8 /tmp/tmp.mqIy3wJfnc 00:36:23.768 13:44:21 keyring_file -- keyring/file.sh@20 -- # killprocess 3759748 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3759748 ']' 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3759748 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3759748 00:36:23.768 13:44:21 keyring_file -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3759748' 00:36:23.768 killing process with pid 3759748 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@967 -- # kill 3759748 00:36:23.768 Received shutdown signal, test time was about 1.000000 seconds 00:36:23.768 00:36:23.768 Latency(us) 00:36:23.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:23.768 =================================================================================================================== 00:36:23.768 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:23.768 13:44:21 keyring_file -- common/autotest_common.sh@972 -- # wait 3759748 00:36:24.034 13:44:21 keyring_file -- keyring/file.sh@21 -- # killprocess 3758389 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3758389 ']' 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3758389 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3758389 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3758389' 00:36:24.034 killing process with pid 3758389 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@967 -- # kill 3758389 00:36:24.034 [2024-07-12 13:44:21.354215] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:24.034 13:44:21 keyring_file -- common/autotest_common.sh@972 -- # wait 3758389 00:36:24.291 00:36:24.291 real 0m13.892s 00:36:24.291 user 0m34.441s 00:36:24.291 sys 0m3.240s 00:36:24.291 13:44:21 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:24.291 13:44:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:24.291 ************************************ 00:36:24.291 END TEST keyring_file 00:36:24.291 ************************************ 00:36:24.549 13:44:21 -- common/autotest_common.sh@1142 -- # return 0 00:36:24.549 13:44:21 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:24.549 13:44:21 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:24.549 13:44:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:24.549 13:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:24.549 13:44:21 -- common/autotest_common.sh@10 -- # set +x 00:36:24.549 ************************************ 00:36:24.549 START TEST keyring_linux 00:36:24.549 ************************************ 00:36:24.549 13:44:21 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:24.549 * Looking for test storage... 
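For reference, the keyring_file suite that just finished (0m13.892s real) drives file-based TLS PSKs entirely through RPC on the bperf socket: register a key file, attach an NVMe/TCP controller with that PSK, verify the reference count through keyring_get_keys, then detach and remove the key. A minimal sketch of that happy-path sequence follows, using the same rpc.py subcommands seen in the trace; the key path is a stand-in for the test's mktemp file, and the error cases the suite also covers (wrong file permissions, key file deleted underneath an existing key) are omitted.

RPC="scripts/rpc.py -s /var/tmp/bperf.sock"

# Register the key file; it must be mode 0600, the trace shows a 0660 file
# being rejected with "Operation not permitted".
$RPC keyring_file_add_key key0 /tmp/psk.key

# Attach an NVMe/TCP controller that authenticates with that PSK.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# Attaching bumps the key's refcount (the trace checks for 2 here).
$RPC keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

$RPC bdev_nvme_detach_controller nvme0
$RPC keyring_file_remove_key key0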
00:36:24.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:24.549 13:44:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:24.549 13:44:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.549 13:44:21 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.549 13:44:21 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.549 13:44:21 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.549 13:44:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.549 13:44:21 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.549 13:44:21 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.549 13:44:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:24.549 13:44:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:24.549 13:44:21 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:24.549 13:44:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:24.549 13:44:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:24.549 13:44:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:24.549 13:44:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:24.549 13:44:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:24.549 13:44:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:24.549 13:44:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:24.549 13:44:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:24.549 13:44:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:24.549 13:44:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:24.549 13:44:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:24.549 13:44:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:24.550 13:44:21 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:24.550 /tmp/:spdk-test:key0 00:36:24.550 13:44:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:24.550 13:44:21 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:24.550 13:44:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:24.550 /tmp/:spdk-test:key1 00:36:24.550 13:44:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3760216 00:36:24.550 13:44:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:24.550 13:44:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3760216 00:36:24.550 13:44:21 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3760216 ']' 00:36:24.550 13:44:21 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.550 13:44:21 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:24.550 13:44:21 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.550 13:44:21 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:24.550 13:44:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:24.550 [2024-07-12 13:44:21.995030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:36:24.550 [2024-07-12 13:44:21.995125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760216 ] 00:36:24.807 EAL: No free 2048 kB hugepages reported on node 1 00:36:24.807 [2024-07-12 13:44:22.027907] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
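The prep_key steps above build the two Linux-keyring test keys: mktemp creates /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, an inline Python step (format_interchange_psk via nvmf/common.sh format_key) converts the raw hex string into the NVMe TLS PSK interchange form NVMeTLSkey-1:00:<base64 blob>:, and the file is locked down to 0600. Decoding the blob that shows up later in the trace yields the original key text plus four trailing bytes, which is consistent with a CRC-32 being appended before base64 encoding. The following is a hedged reconstruction of that step, mirroring the script's own inline python usage; the little-endian CRC-32 suffix is an inference from the logged string, not a confirmed reading of SPDK's helper.

# Reconstruction of prep_key: encode a raw key as an interchange PSK (digest field
# fixed to "00" as in the test) and store it with the 0600 permissions SPDK requires.
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF' > /tmp/:spdk-test:key0
import base64, struct, sys, zlib
data = sys.argv[1].encode()
# Assumed layout: base64(key bytes || little-endian CRC-32 of the key bytes).
blob = base64.b64encode(data + struct.pack("<I", zlib.crc32(data))).decode()
print(f"NVMeTLSkey-1:00:{blob}:")
EOF
chmod 0600 /tmp/:spdk-test:key0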
00:36:24.807 [2024-07-12 13:44:22.053765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.807 [2024-07-12 13:44:22.142032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.064 13:44:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:25.064 13:44:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:25.064 13:44:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:25.064 13:44:22 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.064 13:44:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:25.064 [2024-07-12 13:44:22.384145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.064 null0 00:36:25.064 [2024-07-12 13:44:22.416203] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:25.064 [2024-07-12 13:44:22.416697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:25.064 13:44:22 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.064 13:44:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:25.064 230956280 00:36:25.064 13:44:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:25.064 306706363 00:36:25.064 13:44:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3760224 00:36:25.065 13:44:22 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:25.065 13:44:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3760224 /var/tmp/bperf.sock 00:36:25.065 13:44:22 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3760224 ']' 00:36:25.065 13:44:22 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:25.065 13:44:22 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:25.065 13:44:22 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:25.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:25.065 13:44:22 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:25.065 13:44:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:25.065 [2024-07-12 13:44:22.478200] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc1 initialization... 00:36:25.065 [2024-07-12 13:44:22.478276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760224 ] 00:36:25.065 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.065 [2024-07-12 13:44:22.511680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
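At this point the spdk_tgt target is listening on 127.0.0.1:4420 and the two interchange-format PSKs have been loaded into the kernel session keyring with keyctl add user :spdk-test:key0 ... @s and :spdk-test:key1 ... @s, the kernel returning serials 230956280 and 306706363. The records below enable SPDK's keyring_linux backend over the bperf socket (keyring_linux_set_options --enable) and attach the controller with --psk :spdk-test:key0, and the cleanup path later resolves each key name back to a serial with keyctl search and drops it with keyctl unlink. A short sketch of the keyctl side of that flow, condensed from the trace: the key payload is elided, and sn is simply a shell variable holding whatever serial the kernel hands back.

# Load the interchange-format PSK into the session keyring; keyctl prints the new key's serial.
sn=$(keyctl add user ":spdk-test:key0" "NVMeTLSkey-1:00:<base64-psk>:" @s)

keyctl print "$sn"                        # show the stored payload
keyctl search @s user ":spdk-test:key0"   # resolve the key name back to its serial
keyctl unlink "$sn"                       # drop the key once the test is done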
00:36:25.322 [2024-07-12 13:44:22.540155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.322 [2024-07-12 13:44:22.632888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.322 13:44:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:25.322 13:44:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:25.322 13:44:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:25.322 13:44:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:25.579 13:44:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:25.579 13:44:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:25.835 13:44:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:25.835 13:44:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:26.091 [2024-07-12 13:44:23.493681] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:26.348 nvme0n1 00:36:26.348 13:44:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:26.348 13:44:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:26.348 13:44:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:26.348 13:44:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:26.348 13:44:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.348 13:44:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:26.604 13:44:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:26.604 13:44:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:26.604 13:44:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:26.604 13:44:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:26.604 13:44:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.604 13:44:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:26.604 13:44:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.879 13:44:24 keyring_linux -- keyring/linux.sh@25 -- # sn=230956280 00:36:26.879 13:44:24 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:26.879 13:44:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:26.879 13:44:24 keyring_linux -- keyring/linux.sh@26 -- # [[ 230956280 == \2\3\0\9\5\6\2\8\0 ]] 00:36:26.879 13:44:24 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 230956280 00:36:26.879 13:44:24 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:26.879 13:44:24 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:26.879 Running I/O for 1 seconds... 00:36:27.829 00:36:27.829 Latency(us) 00:36:27.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:27.829 nvme0n1 : 1.02 5171.74 20.20 0.00 0.00 24532.08 8738.13 33399.09 00:36:27.829 =================================================================================================================== 00:36:27.829 Total : 5171.74 20.20 0.00 0.00 24532.08 8738.13 33399.09 00:36:27.829 0 00:36:27.829 13:44:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:27.829 13:44:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:28.087 13:44:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:28.087 13:44:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:28.087 13:44:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:28.087 13:44:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:28.087 13:44:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.087 13:44:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:28.346 13:44:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:28.346 13:44:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:28.346 13:44:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:28.346 13:44:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:28.346 13:44:25 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:28.346 13:44:25 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:28.346 13:44:25 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:28.346 13:44:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:28.346 13:44:25 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:28.346 13:44:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:28.346 13:44:25 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:28.346 13:44:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:28.604 [2024-07-12 13:44:25.969116] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:28.604 [2024-07-12 13:44:25.969741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e5690 (107): Transport endpoint is not connected 00:36:28.604 [2024-07-12 13:44:25.970732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e5690 (9): Bad file descriptor 00:36:28.604 [2024-07-12 13:44:25.971731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:28.604 [2024-07-12 13:44:25.971748] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:28.604 [2024-07-12 13:44:25.971775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:28.604 request: 00:36:28.604 { 00:36:28.604 "name": "nvme0", 00:36:28.604 "trtype": "tcp", 00:36:28.604 "traddr": "127.0.0.1", 00:36:28.604 "adrfam": "ipv4", 00:36:28.604 "trsvcid": "4420", 00:36:28.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:28.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:28.604 "prchk_reftag": false, 00:36:28.604 "prchk_guard": false, 00:36:28.604 "hdgst": false, 00:36:28.604 "ddgst": false, 00:36:28.604 "psk": ":spdk-test:key1", 00:36:28.604 "method": "bdev_nvme_attach_controller", 00:36:28.604 "req_id": 1 00:36:28.604 } 00:36:28.604 Got JSON-RPC error response 00:36:28.604 response: 00:36:28.604 { 00:36:28.604 "code": -5, 00:36:28.604 "message": "Input/output error" 00:36:28.604 } 00:36:28.604 13:44:25 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:28.604 13:44:25 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:28.604 13:44:25 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:28.604 13:44:25 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@33 -- # sn=230956280 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 230956280 00:36:28.604 1 links removed 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@33 -- # sn=306706363 00:36:28.604 13:44:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 306706363 00:36:28.604 1 links removed 00:36:28.604 13:44:26 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3760224 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3760224 ']' 00:36:28.604 13:44:26 keyring_linux 
-- common/autotest_common.sh@952 -- # kill -0 3760224 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3760224 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3760224' 00:36:28.604 killing process with pid 3760224 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@967 -- # kill 3760224 00:36:28.604 Received shutdown signal, test time was about 1.000000 seconds 00:36:28.604 00:36:28.604 Latency(us) 00:36:28.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.604 =================================================================================================================== 00:36:28.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:28.604 13:44:26 keyring_linux -- common/autotest_common.sh@972 -- # wait 3760224 00:36:28.862 13:44:26 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3760216 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3760216 ']' 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3760216 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3760216 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3760216' 00:36:28.862 killing process with pid 3760216 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@967 -- # kill 3760216 00:36:28.862 13:44:26 keyring_linux -- common/autotest_common.sh@972 -- # wait 3760216 00:36:29.429 00:36:29.429 real 0m4.866s 00:36:29.429 user 0m9.179s 00:36:29.429 sys 0m1.486s 00:36:29.429 13:44:26 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:29.429 13:44:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:29.429 ************************************ 00:36:29.429 END TEST keyring_linux 00:36:29.429 ************************************ 00:36:29.429 13:44:26 -- common/autotest_common.sh@1142 -- # return 0 00:36:29.429 13:44:26 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:29.429 13:44:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:29.429 
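The teardown traced above follows the autotest killprocess pattern: probe the PID with kill -0, check the platform, read the command name with ps so a reused PID is never killed blindly, then send the signal and wait for the process to exit. The sketch below is a simplified, stand-alone illustration of that pattern, not the actual autotest_common.sh source; the guard against killing "sudo" mirrors the check visible in the trace.

#!/usr/bin/env bash
# Illustrative sketch of the killprocess pattern traced above (not the SPDK helper itself).
killprocess_sketch() {
    local pid=$1
    # kill -0 only checks that the process exists and is signalable
    kill -0 "$pid" || { echo "process $pid not found"; return 1; }
    if [[ $(uname) == Linux ]]; then
        # read the command name so we never kill an unrelated process that reused the PID
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && { echo "refusing to kill sudo ($pid)"; return 1; }
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # wait only returns cleanly if the process is a child of this shell; ignore failure otherwise
    wait "$pid" 2>/dev/null || true
}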
13:44:26 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:29.429 13:44:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:29.429 13:44:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:29.429 13:44:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:29.429 13:44:26 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:29.429 13:44:26 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:29.429 13:44:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:29.429 13:44:26 -- common/autotest_common.sh@10 -- # set +x 00:36:29.429 13:44:26 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:29.429 13:44:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:29.429 13:44:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:29.429 13:44:26 -- common/autotest_common.sh@10 -- # set +x 00:36:31.335 INFO: APP EXITING 00:36:31.335 INFO: killing all VMs 00:36:31.335 INFO: killing vhost app 00:36:31.335 INFO: EXIT DONE 00:36:32.271 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:32.271 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:32.271 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:32.530 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:32.530 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:32.530 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:32.530 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:32.530 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:32.530 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:36:32.530 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:32.530 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:32.530 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:32.530 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:32.530 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:32.530 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:32.530 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:32.530 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:33.903 Cleaning 00:36:33.903 Removing: /var/run/dpdk/spdk0/config 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:33.903 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:33.903 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:33.903 Removing: /var/run/dpdk/spdk1/config 00:36:33.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:33.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:33.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:33.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:33.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:33.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:33.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:33.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:33.903 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:36:33.903 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:33.903 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:33.903 Removing: /var/run/dpdk/spdk2/config 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:33.903 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:33.903 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:33.903 Removing: /var/run/dpdk/spdk3/config 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:33.903 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:33.903 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:33.903 Removing: /var/run/dpdk/spdk4/config 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:33.903 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:33.903 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:33.903 Removing: /dev/shm/bdev_svc_trace.1 00:36:33.903 Removing: /dev/shm/nvmf_trace.0 00:36:33.903 Removing: /dev/shm/spdk_tgt_trace.pid3441499 00:36:33.903 Removing: /var/run/dpdk/spdk0 00:36:33.903 Removing: /var/run/dpdk/spdk1 00:36:33.903 Removing: /var/run/dpdk/spdk2 00:36:33.903 Removing: /var/run/dpdk/spdk3 00:36:33.903 Removing: /var/run/dpdk/spdk4 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3439952 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3440678 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3441499 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3441932 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3442622 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3442760 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3443474 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3443491 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3443730 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3444921 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3445960 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3446147 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3446335 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3446539 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3446729 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3446889 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3447164 
00:36:33.903 Removing: /var/run/dpdk/spdk_pid3447344 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3447539 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3449886 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3450058 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3450218 00:36:33.903 Removing: /var/run/dpdk/spdk_pid3450341 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3450652 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3450772 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3451081 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3451097 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3451374 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3451388 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3451558 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3451613 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3452052 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3452208 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3452404 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3452570 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3452602 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3452780 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3452944 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3453096 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3453364 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3453525 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3453684 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3453846 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3454110 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3454273 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3454429 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3454607 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3454855 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3455012 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3455175 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3455371 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3455594 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3455755 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3455917 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3456197 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3456348 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3456513 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3456694 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3456898 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3458835 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3512788 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3515406 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3522848 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3526036 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3528380 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3528787 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3532752 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3536580 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3536590 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3537241 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3537780 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3538432 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3538837 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3538840 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3539096 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3539114 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3539240 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3539775 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3540432 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3541083 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3541488 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3541494 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3541641 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3542511 
00:36:34.161 Removing: /var/run/dpdk/spdk_pid3543233 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3548576 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3548739 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3551465 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3555570 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3557712 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3563961 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3569152 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3570349 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3571024 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3581294 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3583441 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3607803 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3610646 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3611821 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3613696 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3613834 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3613968 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3613990 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3614424 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3615737 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3616342 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3616761 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3618374 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3618682 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3619239 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3621621 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3624881 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3628424 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3651903 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3654548 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3658301 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3659238 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3660323 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3662861 00:36:34.161 Removing: /var/run/dpdk/spdk_pid3665099 00:36:34.162 Removing: /var/run/dpdk/spdk_pid3669298 00:36:34.162 Removing: /var/run/dpdk/spdk_pid3669306 00:36:34.162 Removing: /var/run/dpdk/spdk_pid3672078 00:36:34.162 Removing: /var/run/dpdk/spdk_pid3672329 00:36:34.162 Removing: /var/run/dpdk/spdk_pid3672465 00:36:34.162 Removing: /var/run/dpdk/spdk_pid3672731 00:36:34.162 Removing: /var/run/dpdk/spdk_pid3672736 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3673918 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3675612 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3676810 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3678046 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3679267 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3680447 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3684124 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3684578 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3685855 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3686593 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3690294 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3692155 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3695558 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3699010 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3705844 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3710295 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3710298 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3722219 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3722628 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3723156 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3723560 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3724142 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3724544 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3724955 
00:36:34.421 Removing: /var/run/dpdk/spdk_pid3725359 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3727853 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3727992 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3731792 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3731957 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3733566 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3739089 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3739096 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3741987 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3743384 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3744779 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3745525 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3746807 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3747677 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3752959 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3753334 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3753725 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3755276 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3755572 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3755954 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3758389 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3758399 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3759748 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3760216 00:36:34.421 Removing: /var/run/dpdk/spdk_pid3760224 00:36:34.421 Clean 00:36:34.421 13:44:31 -- common/autotest_common.sh@1451 -- # return 0 00:36:34.421 13:44:31 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:34.421 13:44:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:34.421 13:44:31 -- common/autotest_common.sh@10 -- # set +x 00:36:34.421 13:44:31 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:34.421 13:44:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:34.421 13:44:31 -- common/autotest_common.sh@10 -- # set +x 00:36:34.421 13:44:31 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:34.421 13:44:31 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:34.421 13:44:31 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:34.421 13:44:31 -- spdk/autotest.sh@391 -- # hash lcov 00:36:34.421 13:44:31 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:34.421 13:44:31 -- spdk/autotest.sh@393 -- # hostname 00:36:34.421 13:44:31 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:34.679 geninfo: WARNING: invalid characters removed from testname! 
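The coverage steps that follow use a standard lcov flow: capture a test-time tracefile, merge it with the pre-test baseline, then strip third-party and system paths from the report. A condensed sketch of that flow is below; the relative paths and output names are placeholders rather than the Jenkins workspace layout used in this run.

# Condensed lcov flow mirroring the steps in this log; paths and file names are placeholders.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
# capture coverage gathered while the tests ran
lcov $LCOV_OPTS -c -d ./spdk -t "$(hostname)" -o cov_test.info
# merge the pre-test baseline with the test-time capture
lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
# drop code we do not own or do not want in the report
lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov $LCOV_OPTS -r cov_total.info '/usr/*'   -o cov_total.info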
00:37:06.738 13:45:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:06.997 13:45:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:10.313 13:45:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:13.594 13:45:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.121 13:45:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:19.398 13:45:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:21.925 13:45:19 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:21.925 13:45:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.925 13:45:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:21.925 13:45:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.925 13:45:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.925 13:45:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.925 13:45:19 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.925 13:45:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.925 13:45:19 -- paths/export.sh@5 -- $ export PATH 00:37:21.925 13:45:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.925 13:45:19 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:21.925 13:45:19 -- common/autobuild_common.sh@444 -- $ date +%s 00:37:21.925 13:45:19 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720784719.XXXXXX 00:37:21.925 13:45:19 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720784719.BYfobc 00:37:21.925 13:45:19 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:37:21.925 13:45:19 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:37:21.925 13:45:19 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:21.925 13:45:19 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:21.925 13:45:19 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:21.925 13:45:19 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:21.925 13:45:19 -- common/autobuild_common.sh@460 -- $ get_config_params 00:37:21.925 13:45:19 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:37:21.925 13:45:19 -- common/autotest_common.sh@10 -- $ set +x 00:37:21.925 13:45:19 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:21.925 13:45:19 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:37:21.925 13:45:19 -- pm/common@17 -- $ local monitor 00:37:21.925 13:45:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.925 13:45:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.925 13:45:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.925 
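The resource monitors started and later stopped below follow a simple pidfile convention: each collector records its own PID under the power output directory, and cleanup sends TERM to whatever those files point at. The sketch below illustrates that convention only; the function names, directory, and file layout are placeholders, not the exact SPDK pm helper scripts.

# Sketch of the pidfile-based monitor start/stop convention used below; paths are placeholders.
POWER_DIR=./output/power
mkdir -p "$POWER_DIR"

start_monitor() {            # usage: start_monitor <collector-command> [args...]
    "$@" &                   # run the collector in the background
    echo $! > "$POWER_DIR/$(basename "$1").pid"
}

stop_monitors() {
    local pidfile
    for pidfile in "$POWER_DIR"/*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
    done
}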
13:45:19 -- pm/common@21 -- $ date +%s 00:37:21.925 13:45:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.925 13:45:19 -- pm/common@21 -- $ date +%s 00:37:21.925 13:45:19 -- pm/common@25 -- $ sleep 1 00:37:21.925 13:45:19 -- pm/common@21 -- $ date +%s 00:37:21.925 13:45:19 -- pm/common@21 -- $ date +%s 00:37:21.925 13:45:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720784719 00:37:21.925 13:45:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720784719 00:37:21.926 13:45:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720784719 00:37:21.926 13:45:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720784719 00:37:21.926 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720784719_collect-vmstat.pm.log 00:37:21.926 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720784719_collect-cpu-load.pm.log 00:37:21.926 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720784719_collect-cpu-temp.pm.log 00:37:21.926 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720784719_collect-bmc-pm.bmc.pm.log 00:37:22.863 13:45:20 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:37:22.863 13:45:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:22.863 13:45:20 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:22.863 13:45:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:22.863 13:45:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:22.863 13:45:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:22.863 13:45:20 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:22.863 13:45:20 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:22.863 13:45:20 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:22.863 13:45:20 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:22.863 13:45:20 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:22.863 13:45:20 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:22.863 13:45:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:22.863 13:45:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:22.863 13:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:22.863 13:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:22.863 13:45:20 -- pm/common@44 -- $ pid=3772199 00:37:22.863 13:45:20 -- pm/common@50 -- $ kill -TERM 3772199 00:37:22.863 13:45:20 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:22.863 13:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:22.863 13:45:20 -- pm/common@44 -- $ pid=3772201 00:37:22.863 13:45:20 -- pm/common@50 -- $ kill -TERM 3772201 00:37:22.863 13:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:22.863 13:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:22.863 13:45:20 -- pm/common@44 -- $ pid=3772203 00:37:22.863 13:45:20 -- pm/common@50 -- $ kill -TERM 3772203 00:37:22.863 13:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:22.863 13:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:22.863 13:45:20 -- pm/common@44 -- $ pid=3772233 00:37:22.863 13:45:20 -- pm/common@50 -- $ sudo -E kill -TERM 3772233 00:37:23.122 + [[ -n 3340048 ]] 00:37:23.122 + sudo kill 3340048 00:37:23.132 [Pipeline] } 00:37:23.148 [Pipeline] // stage 00:37:23.153 [Pipeline] } 00:37:23.169 [Pipeline] // timeout 00:37:23.174 [Pipeline] } 00:37:23.191 [Pipeline] // catchError 00:37:23.196 [Pipeline] } 00:37:23.213 [Pipeline] // wrap 00:37:23.220 [Pipeline] } 00:37:23.236 [Pipeline] // catchError 00:37:23.246 [Pipeline] stage 00:37:23.248 [Pipeline] { (Epilogue) 00:37:23.263 [Pipeline] catchError 00:37:23.264 [Pipeline] { 00:37:23.278 [Pipeline] echo 00:37:23.279 Cleanup processes 00:37:23.284 [Pipeline] sh 00:37:23.567 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:23.567 3772350 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:23.567 3772466 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:23.581 [Pipeline] sh 00:37:23.865 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:23.866 ++ grep -v 'sudo pgrep' 00:37:23.866 ++ awk '{print $1}' 00:37:23.866 + sudo kill -9 3772350 00:37:23.878 [Pipeline] sh 00:37:24.164 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:34.176 [Pipeline] sh 00:37:34.458 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:34.458 Artifacts sizes are good 00:37:34.472 [Pipeline] archiveArtifacts 00:37:34.479 Archiving artifacts 00:37:34.716 [Pipeline] sh 00:37:35.000 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:35.016 [Pipeline] cleanWs 00:37:35.027 [WS-CLEANUP] Deleting project workspace... 00:37:35.027 [WS-CLEANUP] Deferred wipeout is used... 00:37:35.035 [WS-CLEANUP] done 00:37:35.037 [Pipeline] } 00:37:35.059 [Pipeline] // catchError 00:37:35.074 [Pipeline] sh 00:37:35.354 + logger -p user.info -t JENKINS-CI 00:37:35.363 [Pipeline] } 00:37:35.380 [Pipeline] // stage 00:37:35.386 [Pipeline] } 00:37:35.405 [Pipeline] // node 00:37:35.411 [Pipeline] End of Pipeline 00:37:35.450 Finished: SUCCESS